sts-6664_summer_2020_mbeach_final_essay_o3b.pdf
Below is a short synthesis essay written at the end of a class I took on Philosophy of Science and Technology:
Argumentative Claim: Truth exists, but our understanding of it is an approximation.

Justification: Newton conjectured the existence of the 'all-pervading aether' (Ian Hacking, Representing and Intervening, 254). Hacking shows how for centuries many scientists accepted the existence of the substance. With this basic premise they were able to explain many phenomena, at least in part. The idea more or less gave up the ghost when Einstein's relativity was generally adopted by the scientific community, though that didn't happen immediately. Interestingly, calculations such as those published by Maxwell were referenced by both the Newtonians and Einstein. This points to Hacking's idea that calculation acts as a bridge between speculation and experiment. Why, then, did most scientists make the shift from Newton to Einstein? Thomas Kuhn would argue either could be justified as a paradigm, suggesting relativity would eventually be supplanted by something newer, not necessarily truer. If one subscribes to Kuhn's argument taken to its extreme, then the existence of truth itself is in question. Kuhn might be accused of being "ever learning, and never able to come to the knowledge of the truth" (2 Timothy 3:7, New Testament). Describing skepticism, Baggini and Fosl point to the need for a 'criterion of truth' (The Philosopher's Toolkit, 126). One could argue that such an approach makes truth relative to criteria. Who decides which criteria matter? Calculation and experimentation ultimately lead to a level of probability of truth. Hacking argues experiments fill theoretical blanks (Representing and Intervening, 239). That only improves theoretical probability, what Nancy Cartwright calls an approximation of truth (Representing and Intervening, 218).

I would argue one must adopt one's own criteria for truth. These criteria are what might be considered accepted basic principles. To maintain belief in a principle such as "truth exists independent of man's discovery or invention," a principle I personally hold, one must consider alternatives that introduce some level of doubt. Alternatives will have some logic, and likely some level of experimentation, behind them. Maintaining confidence in the basic principle requires at least a rough understanding of the probabilistic comparison between the conflicting ideas, and a willingness to adjust if the probability of the alternative rises over time. Despite shifting probabilities, one must remember that either or both of the competing principles may be wrong, and be comfortable living with some level of doubt.
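For those who like a formula, one rough way to picture that probabilistic comparison is Bayesian updating. This is my own gloss, not a formalism the essay commits to. If H1 is the principle I hold and H2 is the alternative, each new piece of evidence E shifts the odds between them:

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}
\]

If that ratio keeps drifting toward the alternative over repeated experiments, that is the signal to adjust, while remembering that both hypotheses may still be wrong.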
This year my wife and I traveled to Utah to spend time with family for Christmas. It was fun opening presents, eating too much food, playing games, and getting in a ski day with our son Jacob. We also saw two movies, the next installments of Star Wars and Jumanji.
Before we left for Utah the wonderful missionaries serving in our congregation here in Virginia shared this year's nativity video. This is a powerful tool to convey the Spirit. I was particularly touched by the Spirit at the moment of His birth, again as the shepherds saw the angels, and one last time as one of the wise men first saw the star appear in the sky. I shared the link to the video on my Facebook page. I hope it helped someone.

An interesting note: I have a particular interest in navigation. In the video the wise men use a specific tool to check their position relative to the North Star. At first I erroneously thought they were using the device in reference to the new star. I didn't recognize it, so I consulted Google. The device is called a kamal. In the video one of the Magi puts one end of the chain in his mouth and holds up a rectangular card attached to the other end of the chain to take a position. How it is used is explained on the website Online Star Register: The kamal was composed of a wood or horn parallelogram about one inch by two inches. A string was inserted through the center, with knots tied at different points along its length. Each knot, called an isba, equaled one degree 36 minutes. The knots ranged from 1 to 16 isba. The navigator would put one of the knots between his teeth and hold the kamal at arm's length. When the upper and lower edges of the device became coincident with the pole star and the horizon, the navigator knew his latitude was correct. The latitude of different ports corresponded to particular knots on the string.

One other point came out as my wife and I discussed the Come, Follow Me topic in the car on the way to Utah. Although the star was a sign the Messiah was born and helped guide the wise men to Jerusalem, it did not shine directly over the manger in Bethlehem (contrary to popular depictions). The wise men had to seek out Herod's scholars, who in turn had to dig through the records to find the name of the town. The video points out that their visit came much later, when Christ was more of a toddler than a baby. By that point the family was no longer living in a barn. So the wise men first had to have studied over years to know the sign and to understand basic astronomy. Then they had to be diligent in watching for the sign. Then they had to be willing to follow it. Despite that, they still had to consult scripture to learn of His location. Then, once close, they had to ask the townspeople to point out the specific house. Is that not unlike our efforts to seek Him as well?

One final note. This year was the first we celebrated Christmas without our mothers. They both passed away last year within three weeks of each other. As we were traveling home we noted how this year just didn't feel like Christmas. Not all of our family could be there, but that is always true. However, of our parents, only her father is still with us. Despite the games, presents, family, and all the rest, without our mothers it just wasn't the same.

A few weeks ago a few of us at my office managed to visit the DC car show. Looking at the cars was nice, but we were more focused on the dashboards… and maybe the collector cars on the top floor. There were three floors of cars with pretty much every manufacturer represented. I compiled the data we collected, and here is what we learned. There were 26 automakers and we looked at some cars from each. We collectively reviewed 57 models. If I had to guess, I'd say there were close to 200 models shown.
Most of the manufacturers used either the same dashboard head-unit (entertainment system) in each model, or had a basic and a premium version split among all their models. One exception was Toyota. The rep I happened to speak to at the Toyota booth noted that each factory made independent decisions about which head-unit to deploy. As a result, the Toyota models were all over the map in terms of dashboard implementations from model to model.

Voice Command
Every model of every maker had voice command. It tended to be activated by a button on the steering wheel.

Physical Radio Selection Button
About 35% of the cars had a button, separate from the touch screen, that actually had the word 'radio' written on it. Pushing the button automatically brought up the radio controls on the touchscreen and switched audio to whatever station was tuned in. Almost all of the cars had physical buttons outside of the touch screen to control the radio, such as power, volume and tuning. Many of the cars (we didn't count these, but I'd guess it to be around half) had a physical button that said either 'audio' or 'media'. These would bring up a menu on the touch screen for all audio sources, including radio.

Radio Icon on the Top Tier
By this we mean that when you press the primary button that orients the system, usually labeled 'Home', the radio appears as an option. Like the physical button version, about 35% did this. Pretty much all the rest had an icon that said something like 'audio' or 'media', in which the radio was then one of a number of audio sources available. We would refer to this as second tier, but at least in those cases radio was on equal footing with all the other audio sources.

Carplay/Auto Exclude Radio
This refers to an experience we had at the 2018 CES when there was one model with this approach. In that case when you plugged your phone into the car either Apple Carplay or Android Auto would come up, and all other audio options were grayed out and not available unless you unplugged the phone. Luckily, the car folks have seen how this might make their buyers unhappy. Not one car did that this time. About 50% of the cars would mute the radio and switch away to Carplay or Auto, but you could navigate back to the radio source with no problem. The other 50% brought up the apps, but kept playing the radio until you selected to use the app instead.

Both AM and FM
This question was also a holdover from the earlier CES, when we saw a fairly large percentage of electric cars that did not have an AM tuner in them. This year that was less true. A full 96% of the models we looked at included tuners for both bands. The concern for AM in electric cars is the interference generated by the electric motor, but it's clear manufacturers are willing to spend the small amount it costs to shield the AM radio. Perhaps they got some negative feedback from car buyers.

HD Radio
We looked to see how many offered this option. Of the cars we looked at, 86% had the capability. This bodes well for applications such as MetaPub. It's not clear how many of these systems included HD Radio as standard vs. as an option, but it was so prevalent that it was likely standard. It was difficult to tell in some cases how metadata was handled. If a car had HD Radio, in all cases it at least displayed RDS text and HD text. What was more difficult to confirm was the presence or absence of Artist Experience, meaning graphics in HD Radio. In order to confirm it we had to tune to stations we knew were transmitting graphics, then wait to see if the broadcast signal was strong enough to receive them. This was more of a challenge in the basement than on the main floor, but was still a bit of a science project either way. There were only 6 models that seemed to have RDS only. One model (Acura RLX) didn't have any sort of metadata. All the rest displayed metadata in both RDS and HD modes, though with the caveats on graphics mentioned above.

The Newest History: Science and Technology, by Melvin Kranzberg, May 1962
In this article, Melvin Kranzberg argues for a new approach to history through the lens of science and technology. Old history is about politics and the state. Democratizing history adds society (the people) to history. A few of his arguments in favor of a focus on science and technology in history include:
The original article was published in 1962. As it turns out, many of his predictions have panned out: there are now whole academic disciplines concerned with the history, philosophy, sociology, and policy of science and technology, for example. Despite that, his point about using history to make better decisions about the modern employment of science and technology may be overstated. Most college graduates today completing a degree in a STEM field have likely not taken any courses in the liberal arts areas that focus on STEM. Although most "soft science" programs consider it important for "hard science" majors to have some understanding of such topics, perhaps the hard science program directors are not yet sold on the idea.
The idea of progress, as linked with the most recent version of the idea of technology, implies change. It also implies that the change supports the goals or preferences of whoever is designating the change as progress. In Modernity and Technology, Thomas J. Misa argues that while some see modernity and technological advancement as progress, other philosophers see these linked ideas as a negative. Among his proposals the author states, as proposal 2, that "Technology may be the truly distinctive feature of modernity." Misa posits that those who argue for technological determinism of social norms (modernists), and those who prefer a focus on societal change independent of technology (post-modernists), are both thinking too macro. He argues, "To constructively confront technology and modernity, we must look more closely at individual technologies and inquire more carefully into social and cultural processes."

As Misa offers "proposals" in his article, Melvin Kranzberg likewise offers "laws" in his article Technology and History: "Kranzberg's Laws". His sixth law states, "Technology is a very human activity – and so is the history of technology." In this section of the article Kranzberg argues that "man the thinker" is also simultaneously "man the maker." In fact, he is saying that what man the thinker is thinking about is what to make and how to make it. Like Misa, he questions the technological imperative. Although we often shape our lives around technology such as the clock or the automobile, "this does not necessarily mean that the 'technological imperative'… necessarily directs all our thoughts and actions." Where Misa states that thinking about technology should look more at the specifics, the micro instead of the macro, Kranzberg actually gives some specific examples. In speaking of "technical devices that would make life simpler or easier for us but which our social values and human sensibilities simply reject", he shares how we, in America at least, do not accept the use of communal kitchens. "Our adherence to the concept of the home has made that technical solution unworkable," he adds. Where some might take advantage of the shared benefits of a communal kitchen, including better equipment through pooled resources and less work in cleaning and maintenance through shared effort, American culture does not see the technical advantage as a form of progress.

The Misa piece helps the reader see some linkages between various aspects of technology that are not so obvious. For example, under his proposal 4 comparing modernism and postmodernism, he speaks of architecture as a technology. Modernists, he states, follow the idea that less is more, while postmodernists would argue that less is a bore. Another strength is his linking of the concepts of reason and freedom. He shares both the argument for freedom through reason and the concern that it can lead to domination by reason, hence the opposite idea that reason usurps freedom. Similar examples throughout the work point to both the strength and the weakness of the writing. Presenting multiple sides of the questions helps the reader arrive at a better understanding of them, but the author generally does not take a side. He frames the questions and shares the answers of others who disagree. He also generally shares only two sides of each posed question. I am sure there are many more than two sides that could be explored.
A short team paper on development issues with Windows Vista. I think our son Nate got stuck with this operating system when we got him a laptop for college. At the time it was the only OS pre-loaded on Windows machines, and we couldn't afford to pay for the licenses to replace it with something more reliable. Enjoy this trip down memory lane.
For those of you who have some interest in history, I recently read an article about an early (mid-1800s) mechanical computer. It was envisioned by a fellow named Charles Babbage and was based not on binary but on decimal numbers. He was able to build part of the first version, the Difference Engine, and demonstrate it. The later version was called the Analytical Engine. It could add, subtract, multiply, and divide. There are a bunch of YouTube videos on his ideas and on a version of the machine that has since been built, but the actual device was not constructed until about 130 years after he designed it. Some of his base ideas inspired later approaches to modern computers.
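For the curious, here is a rough sketch in Python of the idea a difference engine mechanizes (my own toy illustration, not Babbage's design; the polynomial is just an example I picked): once the first value and its differences are set up, every further table entry needs nothing but repeated addition, which gears and wheels can do.

```python
# Toy illustration of the method of finite differences behind Babbage's
# Difference Engine. The polynomial below is an arbitrary example.

def difference_table(coeffs, start, count):
    """Tabulate p(x) for x = start, start+1, ... using only additions after setup."""
    degree = len(coeffs) - 1

    def p(x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    # Setup: the first value and its forward differences (done once, by hand).
    diffs = [p(start + i) for i in range(degree + 1)]
    for level in range(1, degree + 1):
        for i in range(degree, level - 1, -1):
            diffs[i] -= diffs[i - 1]

    # Cranking the machine: each new table entry is pure addition.
    results = []
    for _ in range(count):
        results.append(diffs[0])
        for level in range(degree):
            diffs[level] += diffs[level + 1]
    return results

# p(x) = 2x^2 + 3x + 1, tabulated from x = 0
print(difference_table([1, 3, 2], 0, 6))  # [1, 6, 15, 28, 45, 66]
```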
Bruno Latour and other proponents of Actor Network Theory (ANT) focus on interactions between and among actors (people) and actants (things) in a network intended to build knowledge. Emerging nodes and clusters, where interactivity is greatest, define where knowledge is extended. Within ANT, social context and varying goals are not considered important, or even useful, in extending knowledge. Unfortunately, when difference is not examined, some potential influences are missed, and knowledge is not extended everywhere, or as far as, it could be.

In her article Modernity's Misleading Dream: Latour, Sandra Harding points to a defined need within ANT to externalize social thought. She indicates that Latour does acknowledge a need to link the philosophies of science with political science in order to succeed with his three-step process of translating power to the lab. This is true because political power is a source of influence that can help grow the influence of the 'important' actors in the network, meaning scientists. Making the border between the laboratory and the world permeable enough to extend the lab and incorporate the field site is a critical step that requires some translation of political power. Latour's need for unity of purpose, a common world, blinds him to differences, according to Harding. This matters in part because where there is a multiplicity of interests and beliefs, those interests spawn more criteria to help define success. Narrowing the criteria may allow the definer of the criteria, the scientist, to claim success, while many others may see failure. This tension between definitions of success and failure risks future political support, or power, and ultimately weakens the scientific community, or at least the specific lab involved.

Barbara Allen's example of the Holy Cross neighborhood in New Orleans after Hurricane Katrina is a stark one. She examines rebuilding efforts in her study Neighborhood as 'Green Laboratory'. Organizations in the green industry translated their goals onto residents who, out of desperation or perhaps through manipulation, were willing to recast their goal of rebuilding their homes and community in the language of environmental goals. In mapping Latour's ANT model onto the circumstances of the Holy Cross rebuild, Allen shows how the goal of rebuilding homes using green technology, though laudable, represented only half of the goals of the local residents. Because success was defined in terms of homes built in the new way using green technology, community plans did not include economic infrastructure. This may, at least in part, explain why many homes remain vacant and unrepaired. Other symptoms, such as the reemergence of drug dealing, a lack of jobs, and no grocery stores in the district, point to unintended consequences resulting from narrowing the project goals too far. Turning a blind eye to some important social factors that were part of the original community context helped up to a point, such as in securing funds, materials, and expertise, but an opportunity was lost to have a more significant positive impact on the community. In fact, some residents could argue they are worse off than before the project, in that they now have a group of homes rather than a community like the one that existed before the hurricane.

The ability of scientists, or any other group, to define desired outcomes purely in terms of science-related or technology-related goals can make the group successful by its own criteria.
Unfortunately, like generals who win battles but lose wars, science that ignores the success criteria of the other groups involved in a given project may miss as much knowledge as it gains. Worse, it may come to conclusions that are at least partially incorrect.
Here is the second of two final exam papers written for the History of Technology class last semester (Fall 2018). Try not to doze off.

Here is the first of two final exam papers written for the History of Technology class last semester (Fall 2018). Try not to doze off.

Last semester my class was on the History of Technology. One assignment I had was to find three primary sources and document information about them. In a later assignment we were asked to write a paper about what the sources were telling us. Here is the form we were to use to analyze each source.

Below is the list of the three sources I was able to dig up:

Then, putting it all together, below is the output of the work on primary sources:

Like many others these days, I listen to a selection of podcasts. One of my regulars, from NPR, is Hidden Brain by Shankar Vedantam. Recently he had an episode titled Creating God, featuring an evolutionary scientist named Azim Shariff. In essence, the ideas the guest shared pointed to an evolutionary need in early human development for creating community. The result, says Shariff, was the invention of religion, the invention of God. Creating a belief system, goes the argument, helped small groups form a common ethos and a method of bonding. Toward the end of the episode Shariff affirmed he is an atheist. Here is the description of the episode on the Hidden Brain website:
If you've taken part in a religious service, have you ever stopped to think about how it all came to be? How did people become believers? Where did the rituals come from? And most of all, what purpose does it all serve? This week, we explore these questions with psychologist Azim Shariff, who argues that we can think of religion from a Darwinian perspective, as an innovation that helped human societies to survive and flourish. https://www.npr.org/series/423302056/hidden-brain

I have made an argument many times about science and faith, but after listening to the podcast I feel a need to make it again. I firmly believe that human intellect has limits, and that the amount of data available to humankind is limited as well. A limited reasoning ability coupled with a limited amount of information often leads to only a partial, or sometimes completely inaccurate, understanding of truth.

A few days after listening to the podcast I listened to a TED Radio Hour episode focused on this issue of what science knows about truth. The episode is titled The Spirit of Inquiry. A recurring theme in the episode was the trap of arrogance scientists often fall into by believing the conclusions science draws. Multiple presenters, scientists not religionists, spoke about how science really doesn't prove anything, but gives us a reasonable framework to try to understand the world around us, and the worlds in the cosmos. Here is the description of the episode on the TED Radio Hour website:

The force behind scientific progress is the simple act of asking questions. This episode, TED speakers explore how a deeper and more humble style of inquiry may help achieve the next big breakthrough. https://www.npr.org/programs/ted-radio-hour/archive

There is a danger in this approach as well. A follower of this line of thought can come to the conclusion that truth is not really knowable. In his epistle to Timothy, the apostle Paul describes people in the last days. One way he describes them (us?) is in 2 Timothy 3:7, "Ever learning, and never able to come to the knowledge of the truth."

When atheistic scientists remove the possibility of the existence of God, and accept completely the ideas of evolution, I can understand how they, like Azim Shariff, come to the conclusions they do. That said, if you assume one possibility should be completely ruled out (the actual existence of God, for example), and you assume another possibility is the only description of reality, then how can someone really put stock in such a one-sided perspective? Isn't that the same argument such scientists use to discredit those who claim a belief in God?

Personally I put little hope in any version of truth that relies only on the logical arguments of humankind, be they scientific or religious. By my own experience through prayer, and seeing the results in the lives of those who choose to live the gospel of Jesus Christ, I find reason to view any idea through the lens of how it does or does not align with truth revealed through ancient and modern prophets. Coming to know truth requires more than thought. The Savior puts it this way in John 7:17, "If any man will do his will, he shall know of the doctrine, whether it be of God, or whether I speak of myself."

For me, faith is stronger than belief. Believing in something does not make it true, nor does belief imply action. Faith is doing His will (taking action). Doing His will increases faith. As faith increases, so does understanding. As understanding increases, a person comes closer to truth.
As the scripture notes, doing His will discloses truth. Stated in the negative: if the doctrine is not from God, is not true, then doing the act will reveal its untruth to the doer, and faith does not increase. I'm just fine that many do not accept my perspective. I'm also aware that when considering religion there is a great deal of variation and contradiction among belief systems. I wonder, though, how that is any different from the variation and contradiction among various scientific camps. Scientific evidence is just that, evidence. Scientific theory is just that, theory. So much of what gets represented as "fact" later proves not factual. Religions have come and gone throughout human history. So too have scientific theories.

A few weeks ago I attended a symposium on Media over IP hosted by the North American Broadcasters Association (NABA). As with many of these events the topics seemed mostly TV-centric, but there are usually some radio gems hidden in the flow. Many of the presentations discussed issues around implementation of a new SMPTE standard called ST-2110. The point of ST-2110 is to let a TV station pass all its content around inside its plant using IP. Most of our public radio stations have been doing this for years. Many have been using Livewire as their IP audio network. Others have been using Wheatnet as their primary IP system. The TV people by and large have not been IP based, but rather have used SDI or HD-SDI as their data format. In fact, HD-SDI is exactly the sort of system we put in during my time in Nebraska when we went from analog to digital TV. That was in the early 2000s.

One encouraging thing I heard in all this conversation was that the architects of the SMPTE ST-2110 standard decided to adopt AES67 as their format for managing audio. We at NPR use it at places in our system, and our recent RFP requires AES67 interoperability for the new IRD (satellite receiver) that will be placed at each station. Both Livewire and Wheatnet claim some level of interoperability with the AES67 standard. This is good news for public radio stations that are dual licensees, meaning they are also a public television station. More than 70 of our interconnected stations are dual licensees. If both their radio plant and their television plant are using AES67 for moving audio around, it would help break down some of the technical silos and boundaries that often exist in these locations. Here is a slide highlighting the main subdivisions of the new standard:

One advantage of the proposed standard is that it eliminates one layer of data. In the current approach, the SDI transport stream contains the "essence" (meaning video, audio, metadata) wrapped in the SDI format. The SDI is then wrapped in the IP format. When a station wants to decode the video or audio they have to first unwrap the IP packets, then the SDI format, to get to the essence. Using ST-2110 means the essence is carried directly in the IP stream and there is no additional SDI layer. Time will tell how quickly the TV folks catch up with us in their trek toward an IP-based station infrastructure. Public radio is not 100% there either, but we are significantly closer than our TV cousins.
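To picture that layering difference, here is a toy sketch (my own simplification in Python, not the actual SMPTE packet formats or field names) of the two wrapping approaches described above:

```python
# Toy model of the encapsulation difference: SDI wrapped in IP versus
# each essence carried directly in its own IP stream (the ST-2110 idea).
# Class and field names here are illustrative, not from the standard.

from dataclasses import dataclass
from typing import List

@dataclass
class Essence:
    kind: str        # "video", "audio", or "metadata"
    payload: bytes

# Legacy approach: essences are multiplexed into an SDI frame,
# and the whole SDI frame is then wrapped in IP.
@dataclass
class SdiFrame:
    essences: List[Essence]

@dataclass
class IpPacketWithSdi:
    sdi: SdiFrame    # receiver unwraps IP, then SDI, to reach an essence

# ST-2110-style approach: each essence travels as its own IP flow,
# so a receiver subscribes only to what it needs (audio being AES67-compatible).
@dataclass
class IpPacketWithEssence:
    essence: Essence  # one unwrap step

def audio_flows(packets: List[IpPacketWithEssence]) -> List[Essence]:
    """E.g., the radio side of a dual licensee picking off just the audio."""
    return [p.essence for p in packets if p.essence.kind == "audio"]
```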
As you might already know, the public radio satellite system operates in the C-band. The downlink portion of that band runs from 3.7 GHz to 4.2 GHz. A little over a year ago, terrestrial broadband services convinced the FCC to allow them to start offering data services in the extended C-band (just below 3.7 GHz) within the U.S. NPR filed a formal argument against the idea, as did many others. Our arguments fell on deaf ears.
Fast forward to now. We are in the midst of an even bigger threat to our C-band operations: the FCC has asked for comments on the idea of allowing terrestrial broadband providers to operate within the downlink band; the entire band (3.7–4.2 GHz). If the FCC allows this, the result will be increased RF noise (interference) in those frequencies and lower performance at the downlinks located at many of our station customers. As you might imagine, we joined forces with a large number of other entities to fight this. The satellite owners such as Intelsat and SES submitted comments, as did many satellite bandwidth users like NPR. We are analyzing all the filings and reaching out to other constituencies. Industry associations such as the NAB, SIA and NABA also weighed in. It's hard to say how this will go.

The C-band is already shared with fixed microwave systems. In that case, if our antennas are registered, then new microwave systems must not interfere. The broadband network proposals would make this less secure, even if they only win access for fixed systems (antennas that don't move). If they were to get all they want, including mobile operations (read: cell phones), then the interference would be random and unpredictable. The broadband advocates are saying that satellite antennas could be licensed for specific frequencies at specific look angles instead of the full band and full arc as we often do now. That might lessen the potential interference, but it would also mean that every time a network changes transponders or satellites there would be another filing process with the FCC. If a network has to move due to a problem on the satellite, that migration needs to happen quickly; an impossibility if filing with the feds becomes a requirement every time a change is needed.

There is also an economic consideration. If satellite antennas have access to less of the C-band frequency block, then the value of what is accessible will go up. Less bandwidth availability (supply) means increased value for the bandwidth. Increased value means increased cost.

This post was originally published in March of 2017 on another platform:
The September 14, 2016 edition of RadioWorld posted an interesting interview with Ray Sokola. He is a VP at DTS, the company that not too long ago bought iBiquity. You may already know that iBiquity is the owner of HD Radio technology. The focus of the interview is on "hybrid radio". That's the phrase gaining traction these days when referring to the integration of broadcast radio content with online-delivered content. NPR Distribution has been contributing to the hybrid radio industry effort through a service we call MetaPub.

When Sokola was asked to describe hybrid radio he said it is, "the connection of traditional radio with the internet. This expands the listening experience to take advantage of the best of the past, present and future capabilities that cellular connectivity, the internet, streaming and apps have added to the traditional radio experience. The basic examples start with providing album art and easy purchase capability to a radio experience, but it goes way beyond that and is only limited by our imagination."

With MetaPub we've started with text, graphics and links, but we assume public radio stations and producers will figure out more ways to use metadata over time. Sokola seems to be thinking the same way. "Hybrid radio is a platform for innovation that can be taken anywhere by creating the right connection between the radio, the internet, the rest of the vehicle, the auto manufacturer and the consumer. That, I think, will evolve in many ways."

Encouraging broadcasters to catch up, Sokola said, "Radio is the only consumer medium still not fully digital. Consumers have come to expect that all their audio and video entertainment sources will have added features and digital quality. If a radio station can't offer Artist Experience visuals for album art, station logo and advertiser value-added, they are last century's medium in the eyes of today's sophisticated consumer."

Not noted in this article is that DTS recently purchased Arctic Palm. That company/product is one of the middleware tools some of our stations are using to interact with MetaPub. For the full article go here: http://www.radioworld.com/article/dts-seeks-to-immerse-you-in-the-soundfield/279674

We at NPR Distribution have been getting noticed for our MetaPub efforts. For example, MetaPub participation in the California Shakeout made the front page: http://www.radioworld.com/article/metadata-test-is-part-of-quake-drill/280108

This post was originally published in March of 2017 on another platform:
One of the selling points of the NPR One app is that it follows what you listen to, then makes suggestions about other things you might be interested in based on your tastes. It learns your tastes by noting what you listen to and what you don't listen to (skip). This pattern of recommending may sound familiar. If you've ever ordered something from Amazon you will recognize the suggested list that says something like "other people who ordered what you did have also ordered these…" More recently I noticed that Amazon also tracks what I looked at but didn't order. After logging on I got a note that said something like "based on your recent searches you might be interested in some of these related items."

Here's yet another story, in IEEE Spectrum, of how Spotify is jumping on the curation-suggestion-individualization bandwagon: http://spectrum.ieee.org/view-from-the-valley/computing/software/the-little-hack-that-could-the-story-of-spotifys-discover-weekly-recommendation-engine

In this case, the idea/project was started by some engineers within Spotify. The recommendation tool at first didn't take off. One of the creators shared, "My hunch was that navigating to this page and looking at albums was too much work." The original tool required customers to go and check out the suggested content. Gradually they developed the more proactive tool. The article shares, "Their system looks at what the user is already listening to, and then finds connections between those songs and artists, and other songs and artists, crawling through user activity logs, playlists of other users, general news from around the web, and spectragrams of audio. It then filters the recommendations to eliminate music the user has already heard, and sends the individualized playlist to the user."

Without telling people, they pushed out the feature to Spotify employees. Reaction was positive. As the tool became popular internally, Spotify decided to put it into the production system for customers. Whether you think this sort of thing is helpful or creepy, it's clear that companies believe it adds value. I'm not sure there is a place for this particular idea in all applications, but what I find interesting is that the idea came from someone seeing a need and a solution without waiting for "management" to point them down a path. From the article, "'This wasn't a big company initiative,' Newett says, 'just a team of passionate engineers who went about solving a problem we saw with the technology we had.'"
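Just as a thought experiment, here is a heavily simplified sketch of the "find connections, then filter out what you've already heard" idea described in that quote. This is my own toy code, not Spotify's (or NPR One's) system, and the listening data in it is made up.

```python
# Toy co-listening recommender: suggest items that other users who share
# your history also played, minus anything you've already heard.
# All names and data are invented for illustration.

from collections import Counter

# user -> set of items they have listened to (made-up sample data)
history = {
    "me":    {"Fresh Air", "Planet Money"},
    "user2": {"Fresh Air", "Planet Money", "Radiolab"},
    "user3": {"Planet Money", "Hidden Brain", "Radiolab"},
    "user4": {"Car Talk"},
}

def recommend(user, history, top_n=3):
    """Rank unheard items by how often they co-occur with the user's history."""
    mine = history[user]
    scores = Counter()
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(mine & items)       # how similar their taste is to mine
        if overlap == 0:
            continue
        for item in items - mine:         # filter out what I've already heard
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("me", history))  # ['Radiolab', 'Hidden Brain']
```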
This post was originally published in August of 2016 on another platform:

In an article posted in IEEE Spectrum on May 27, 2016, Mark Anderson reports on a recent court case in which Oracle accused Google of copyright infringement. Google has used Oracle-published Java APIs in creating the Android OS and allowed developers in the Android ecosystem to create apps using the OS. Oracle says it will appeal. That remains to be seen. From the article: "The jury's verdict, so long as it withstands what Oracle said on Thursday would be an appeal, arguably opens the door further for developers to enjoy protected use of other companies' APIs. And that, says one leading software copyright expert, is good news for creative software developers and for users of the millions of apps, programs, and interfaces they create."

As a tech user I've never been much of a Java fan. My beef was with the waves of Java updates that seemed at times to be daily. Interacting with more than one machine made it worse, as each machine would give me the Java-needs-an-update message. I have noticed these messages have been fewer lately. That may be because more and more software systems are dumping Java. I don't know. Since I don't use a 'Droid phone I'm not sure how much of an issue this is, but obviously it has been enough of an issue to cause the court battle.

Google rubbed a little salt in Oracle's wound during the closing argument by bringing up Oracle's failed attempt at creating a mobile device OS of its own: "The closing argument was one in which the lawyer for Google was able to say: 'Look, they tried to make a phone with Java, but they failed,' Samuelson says. 'We did so, but we put five years' worth of effort into developing this wonderful platform that in fact has become this huge ecosystem that Java developers all over the world have been able get more of their stuff on because of this. Essentially, [Oracle's] argument is sour grapes.'"

Though at my work we are no Google, we have had our own negative interactions lately with Oracle. I'm not sure what Oracle's business plan looks like, but I'm not buying stock. Here is the full article: oracle_v_google.pdf

This post was originally published in August of 2016 on another platform:
From the June 2016 issue of PM Network there is a short entry about Microsoft placing server farms on the seabed off the coast of California. Not necessarily a philosophic topic, but I found the idea… well… cool (cough, cough). Here is the entire text:

Data's Deep Dive
The technology industry has a heat problem. Massive data centers help deliver videos, email and social network content to billions of people – and generate tons of heat. This leaves tech companies with massive air conditioning bills and the constant risk of crashes from overheated servers. Microsoft thinks the solution lies at the bottom of the sea. Earlier this year, the Redmond, Washington, USA-based company concluded a 105-day trial of an underwater data center project. A team plunged a server rack encapsulated in a watertight steel cylinder 30 feet (9.1 meters) underwater off the coast of California. The capsule was outfitted with more than 100 sensors to measure pressure, humidity, motion and other conditions. The ocean water keeps the servers cool, eliminating expensive energy bills and reducing the risk of crashes. Subsea data centers might even be able to power themselves using tidal power or underwater turbines. The challenge is creating units that can function without regular checkups. Microsoft estimates that an undersea system may be able to go up to 20 years at a time without maintenance. To alleviate environmental concerns, the project team used acoustic sensors to determine if noise from the servers would disrupt ocean wildlife – and found that any sound from the system was drowned out by the clicking of nearby shrimp. Early tests also showed that heat generated by the servers only affected water a few inches around the vessel. The project's test phase was so successful that it ran 75 days longer than planned. Researchers believe that mass-producing server capsules can slash setup time of new data centers from two years to 90 days. If that's the case, a big new wave of data center projects could be on the way. – Kelsey O'Conner

[Photo: data center being loaded on a ship, from the NY Times]

This post was originally published in August of 2016 on another platform:

An interesting focus paper was recently published by Radio World. The topic is Audio over Internet Protocol (AoIP) and it is titled Radio AoIP 2016. Each piece in the focus paper reviews some aspect of the AES67 and AES70 standards. AES is the Audio Engineering Society. The AES has created many standards for the audio industry over the years. AES67 is intended to be an interoperability standard, such that if audio is shared between two pieces of equipment over an IP network, and both pieces of equipment use this standard, then the audio should transfer even if the equipment comes from different manufacturers. AES70 is a standard for monitoring and control of IP-networked audio equipment.

As it turns out, despite what this document encourages, organizations like us at NPR Distribution and public radio stations are not really able to be 100% on the AES67 standard. Why? Because not all the manufacturers of the equipment we use have adopted it. Some that have adopted it have made unique adjustments in the way they deploy the standard in their equipment. They likely take this route to encourage engineers to use their gear and not mix-and-match with other manufacturers (their competitors). This seems counterproductive to me. Often the members of these standards committees within AES come from the manufacturers themselves.
If they are dedicating some of the time (meaning money) of their senior engineers to create these standards, then limiting full compatibility in some way makes that time and energy less useful. Maybe they do it so they can market the fact that they have the specific AES standard available to purchasers. Maybe it's so they can get a look at how their competitors are approaching some of the same topics. In either case it may be a bit of a Potemkin village if, in the end, only some adopt and others adopt in a slightly non-compliant way.

Some manufacturers claim to be fully compliant and only put their unique spin into it using optional sections of the standard. If that is true, then their gear would work (and perhaps does) with other fully compliant equipment. In these cases the vendor can rightfully claim to be offering "enhancements" in their application of the standard. Perhaps they are marketing their gear as AES67 compliant knowing that other manufacturers will not adopt it, so they can put the blame on the others when it doesn't work. If this perspective is true, then saying gear is compliant is for marketing purposes, knowing that a full system is not likely to happen unless an organization like us uses all the components from a single vendor.

It may be that eventually all manufacturers will become compliant and we can move from the older standards we use to the newest. At the same time, it may also be that by the time all the manufacturers catch up to AES67, a newer and better standard will come along, and the cycle will start all over again. You can see why our engineers have their work cut out for them trying to keep us up to the latest standards possible while not always having the full cooperation of the equipment manufacturers. This is just one of the many challenges for our engineers as they plan what our system will look like during our next major roll-out beginning in FY2018. Here is the full focus paper: aoip.pdf

This post was originally published in August of 2016 on another platform:
I recently listened to a TED Talk about how some companies had patented portions of the human genome. This sort of patenting had been going on for more than 25 years. Sadly, real people in the US Patent Office somehow decided that the patent applications in question had merit. The ACLU pursued a lawsuit that eventually went to the Supreme Court. The decision was unanimous: although a company could patent a process to isolate specific gene fragments, it could not patent portions of the genome itself. The base argument is that the human genome is a part of nature and no person is able to create it. Much like a mineral, you can do something original to use it or modify it, but you can't make it.

What made this important was that companies would patent an isolated gene, then charge licensing fees to medical practitioners isolating and examining those genes in order to diagnose patient conditions and know how to treat them. Even worse, once a company had patented the isolated gene, it would sometimes stop advancing the study of the gene itself. By charging large fees to allow others to study the gene, and by not furthering the study themselves, these companies were in effect stopping medical advancement related to the specific gene in question. As a result, real people went undiagnosed and untreated.

These companies attempting to profit from ownership of a human gene remind me of people or companies dubbed "patent trolls." The classic example is a person who digs into someone else's technology, modifies it slightly, files a patent, then waits for someone to use the change so they can pounce with a lawsuit. The patent troll never actually creates anything of value with their supposed idea. Their only intent is to sue and make money.

I experienced this once while working in Nebraska. In preparation for the conversion to digital television we had been able to modify our workflow by using some new technology then available. Our effort got some attention in the trade publications of the day, including an interview with one of our engineers. During the interview he mentioned the specific equipment models we were using. About a month later I got a legal letter in the mail ordering us to cease and desist all activity using one specific piece of equipment. Supposedly, the equipment in question was using technology that had been patented by the author of the letter. The manufacturer was claimed to have had no right to sell the equipment because it had not paid him a license fee. The letter also told us we had to delete all content that had passed through this equipment, and that they might consider suing us for the money we received from underwriters for any programming that had passed through the equipment. I was shocked.

Don't get me wrong. If a manufacturer uses someone else's intellectual property, then they should pay for the value of that intellectual property. That said, even if this guy had a leg to stand on in his pursuit of the equipment manufacturer, why was he targeting us as a user of the equipment? After engaging an attorney with experience in patent law, we learned that even though we had not created the equipment, just using it made us liable. Through our respective lawyers, we agreed not to use the one specific piece of equipment anymore, and he agreed not to ask for any money or rights to the content. Why did he go after us?
Because when we immediately complained to the equipment vendor, the vendor was unwilling to defend us and had no legal opinion from a court that it could point to in support of its rights. In short, the troll chased us to get to the manufacturer. In our case it worked. Because the vendor would not defend us, and only gave us a credit for the now unusable equipment after lots of complaints on our part, we banned purchasing any more equipment from that manufacturer for years. That wasn't just our idea at our little organization. Remember, at the time we made our major purchases through the State of Nebraska procurement office, and our primary lawyer was the state Attorney General. I'm not sure there is a specific lesson here for my current position, other than to consider multiple design alternatives in case our primary plan suddenly becomes unusable for whatever reason.

This post was originally published in February of 2016 on another platform:
I read the attached article a while ago and it made me chuckle. Of course, things are funniest when they ring true. That's why we laugh, to avoid crying. In this case the author was griping about how overwhelming all the technology can become today. She pines for yesteryear when things were simpler. Many of us have been at the center of the high-tech boom. We breathe tech. Given that, I suppose we all might sometimes feel a bit like Dr. Berman. When asked what I like to do when I get a little free time, my response often goes a little like this: "My work is inside, high-tech and intellectual, so in the off hours I prefer things that are outside, low-tech and physical." I would argue that too much of any good thing can become a bad thing, but so can a dearth of a good thing. Perhaps what Dr. Berman is really seeking isn't killing off technology, but rather some way to better discipline her use of it. Balance is an important part of life. I do find it ironic that after you read her rant about too much email, and then scroll down to the description of the author on the second page, you find the statement, "She can be reached by email at e.berman22@gmail.com." tech_rip.pdf

This post was originally published in February of 2016 on another platform:
At the NAB conference last year I picked up a book by this title. It was written by Andrew Dubber. It's interesting where he took this work. I had assumed he'd focus in on new technology and how it's changing what we do. He eventually gets there in later chapters. What he did instead at the beginning was to question what radio even is. On page 10 he says, "As part of a changing media environment, radio becomes a moving target." He continues, "Something is happening to radio - indeed something has happened to radio - and in order for us to understand what has changed about it and what that means, we need to stop and attempt to gain some clarity about what 'radio' was in the first place."

Dubber eventually describes a context to define and understand what is meant by the word radio. He proposes a list of 10 categories through which radio is defined. Here is the list:

Device - This is the tool used to listen to radio. It could be the traditional device in your car dashboard, on the kitchen table or in the home stereo system. He also includes less traditional devices such as mobile phones, computers, and tablets.

Transmission - Here he includes electromagnetic radio waves that are modulated, wired internet connections, cell phone data streams and satellites. I would add audio channels on TV cable and satellite systems.

Text - By this Dubber means the programs offered through the medium.

Subtext - Here Dubber is speaking of the intentions behind the programming. What are the underlying purposes for making radio content? The motivation shapes the outcome.

Audience - This refers to the people who consume the content, no matter how it gets to them.

Station - Dubber uses this term more broadly than the traditional idea of a business entity that broadcasts a radio signal over the air in a geographic location. He also includes any organization that produces texts (content).

Political Economy - Here he wants us to consider the political and economic forces that shape the content shared and the funding mechanisms. Dubber also includes ideas surrounding the performance of some social or civic function.

Production Technologies - Tools used to create radio texts (content). Think hardware and software.

Professional Practice - In this area Dubber refers to techniques and workflows for using the technology to create and distribute the content.

Promotional Culture - This one relates to several of the others, but with the intention to have a specific effect on the consumer behavior of audiences.

Whew! So... How do YOU define RADIO? Let's see what you think. Of course that assumes anyone is actually reading this and has/shares an opinion that I'm OK leaving posted here. ;-)
Michael Beach
Grew up in Berwick, PA then lived in a number of locations. My wife Michelle and I currently live in Georgia. I recently retired, but keep busy working our little farm, filling church assignments, and writing a dissertation as a PhD candidate at Virginia Tech. We have 6 children and a growing number of grandchildren. We love them all.