De-centering the ‘Big Picture’: The Origins of Modern Science and the Modern Origins of Science, by Andrew Cunningham and Perry Williams. In this article, the authors look at historical approaches to science. They divide scientific histories into two sorts: those that seek a macro story, and those that focus on more personal experience. The macro version of history attempts to paint broad patterns; patterns and major milestones dominate the depictions. The micro version of history tends to focus on individual scientists’ experiences and the breakthroughs that result. The “origins of modern science” refers to the period in seventeenth-century Europe commonly called the scientific revolution. The authors note that at the macro level historians generally consider areas of:
In the area of the modern origins of science, the focus is more on a plurality of ways to know the world. This is an expression of a transition from considering nature as created by God to an attempt to understand natural processes. When the authors speak to de-centering the big-picture histories, they note how these histories are often confined to the last 250 years and are Europe- and North America-centric. As opposed to seeking only scientific process knowledge, Cunningham and Williams also stress other sorts of knowledge such as knowledge of fact, technical knowledge, relational knowledge, and moral knowledge. In terms of de-centering, the article does not go on to speak to other scientific or knowledge centers geographically, such as Asia, Africa, or South America. The authors also don’t speak to traditional local forms of knowledge such as non-Western approaches to medicine. Cunningham and Williams depict science as an invention, or perhaps they might better have called it a convention. If this is true, then why focus on only one conventional approach?
The Newest History: Science and Technology, by Melvin Kranzberg (May 1962). In this article, Melvin Kranzberg argues for a new approach to history through the lens of science and technology. Old history is about politics and the state. Democratizing history adds society (the people) to history. A few of his arguments in favor of a focus on science and technology in history include:
The original article was published in 1962. As it turns out, many of his predictions have panned out, in that there are now whole disciplines related to science and technology that are academically concerned with history, philosophy, sociology, and policy, for example. Despite that, his point about using history to make better decisions about the modern employment of science and technology may be overstated. Most college graduates completing a degree in a STEM field today have likely not taken any liberal arts courses that focus on STEM. Despite the fact that most “soft science” programs consider it important for “hard science” majors to have some understanding of such topics, perhaps the hard science program directors are not yet sold on the idea.
Sustainable Design: Beyond the Innovation-Driven Business Model, by Hartmut Esslinger, IEEE Engineering Management Review, Vol. 41, No. 2, Second Quarter, June 2013. This article considers what role design engineers might play in ‘greening’ industry. I’m not sure I buy all the dire predictions about how quickly things will get bad in the immediate term, yet I’m also all for being wise stewards of our planet. I hear all the messaging about scientific prediction and am reminded that it is just that, prediction. Such predictions are based on modeling using available data and making probabilistic assumptions. That said, I’m with Esslinger on part of the approach he espouses: “Designers have a unique opportunity to drive the development of sustainable products by virtue of our role in the early stages of the product lifecycle process.” Esslinger assumes “some radical change” is required, but also advocates “evolving our industrial processes” by taking advantage of opportunities to “apply technologies, or products, or practices that are currently available, or can be easily adapted from existing models or practices.” The more evolutionary portion of his proposal seems more realistic. Radical change can easily bring unintended consequences, making some things better and other things worse.
While at church a few weeks ago I listened to the sacrament talks. They were about freedom and agency. It so happens that I have also been reading some sociological theory. In particular I read some writings of Herbert Marcuse. He argues society is ‘higher’ with more freedom, but his notion of freedom is troubling. He says choice between limited options (socially constructed options) is really not freedom. He also says evidence of ‘higher’ culture is when more diverse forms of sexuality are public, and publicly accepted.
It seems to me like Satan always argues down this path. He cries ‘No boundaries’ and suggests when boundaries are in place they are motivated by power (slave/master) relationships. Yet as I have written in the past, good and evil can be best understood (maybe only understood) by comparison with each other. To understand right from wrong, a boundary is necessary. Book of Mormon apostates inevitably refer to the gospel as a ‘foolish tradition’ or a tool for leaders to exercise power over others (see Alma 30 for example). Unfortunately, there have been examples of power-hungry religious and civic leaders throughout history willing to compromise ethics, but painting all leaders with this sort of brush is disingenuous at best. It’s another way Satan fights dirty. He inspires such action by some leaders, then points to it as an excuse for other, just as bad, behavior. One of the arguments Marcuse uses is that total freedom to choose any option is a must. If options are somehow limited then one is not really free to choose, only to choose from limited options filtered, or narrowed, by someone with power. The options, goes his position, are intended to control behavior to keep or increase power for those in charge of the options. For example, capitalists narrow options to increase profits under the guise of efficiency. Yet if there were an infinite number of purchasing options from any company then the business of providing a commodity would not be sustainable. The result would likely be business collapse, causing even the limited number of options to be lost. A friend of mine recently read a paper about ketchup. Some stores offered a large number of ketchup options, assuming it would cause an increase in ketchup purchases. Instead the study found overall purchases decreased. Once the ketchup options were limited, sales increased. The takeaway was that lowering the number of options helped people make selections. Religion, Marcuse argues, inhibits sexual choice in order to repress people through feelings of guilt. I think straying from Heavenly Father’s description of the law of chastity is less about who or how people love, and more about the effect on family and, by extension, society. For those of you who have some interest in history, I recently read an article about an early (mid-1800s) mechanical computer. It was envisioned by a fellow named Charles Babbage and was not based on binary, but rather decimal numbers. The first version, the Difference Engine, he was able to build in part and demonstrate. The later version was called the Analytical Engine. It could add, subtract, multiply, and divide. There are a bunch of YouTube videos on the ideas he had and one version of the machine that has been built, but the actual device was not constructed until about 130 years after he invented it. Some of his basic ideas inspired later approaches to modern computers.
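The article didn’t get into how the Difference Engine actually produced its tables, so this bit is my own aside: the trick, as I understand it, is the method of finite differences, which lets a machine tabulate a polynomial using nothing but repeated addition. Here’s a little Python sketch of the idea; the example polynomial and numbers are just something I made up to show the principle.

```python
# Toy illustration of the method of finite differences -- the principle behind
# Babbage's Difference Engine: seed the machine with a few initial values of a
# polynomial, and every further value follows by repeated addition alone.

def tabulate(f, degree, start, count):
    """Return f(start), f(start+1), ..., using only additions after the setup."""
    # Build the initial value and its successive differences.
    values = [f(start + i) for i in range(degree + 1)]
    state = [values[0]]
    col = values
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        state.append(col[0])
    # Crank the engine: each step adds every difference column into the one above it.
    out = []
    for _ in range(count):
        out.append(state[0])
        for i in range(len(state) - 1):
            state[i] += state[i + 1]
    return out

# Example: a quadratic, so only two difference columns are needed.
print(tabulate(lambda x: x * x + x + 41, degree=2, start=0, count=8))
# [41, 43, 47, 53, 61, 71, 83, 97]
```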
Bruno Latour and other proponents of Actor Network Theory (ANT) focus on interactions between and among actors (people) and actants (things) in a network intended to build knowledge. Emerging nodes and clusters, where interactivity is greatest, define where knowledge is extended. Thoughts of social context and varying goals in ANT are not considered important, or useful, in extending knowledge. Unfortunately, when difference is not examined, some potential influences are missed, and knowledge is not extended everywhere, or as far as it could be. In her article Modernity's Misleading Dream: Latour, Sandra Harding points to a defined need within ANT to externalize social thought. She indicates that Latour does acknowledge a need to link the philosophies of science with political science to succeed with his three-step process translating power to the lab. This is true because political power is a source of influence that can help in growing the influence of the ‘important’ actors in the network, meaning scientists. Making the border between the laboratory and the world permeable enough to be able to extend the lab and incorporate the field-site is a critical step that requires some translation of political power. Latour’s need for unity in purpose, a common world, blinds him to differences, according to Harding. This matters in part because when there is a multiplicity of interests and beliefs, those interests spawn more criteria to help define success. Narrowing criteria may allow the definer of the criteria, the scientist, to claim success, while many others may see failure. This tension between definitions of success and failure risks future political support, or power, and ultimately weakens the scientific community, or at least the specific lab involved. Barbara Allen’s study of the Holy Cross neighborhood in New Orleans after Hurricane Katrina offers a stark example. She examines rebuilding efforts in her study Neighborhood as 'Green Laboratory'. Organizations in the green industry translated their goals onto residents who, out of desperation, or perhaps through manipulation, were willing to shift their goals of rebuilding their homes and community into the language of environmental goals. In mapping Latour’s ANT model onto the circumstances of the Holy Cross rebuild, Allen shows how the goal of rebuilding homes using green technology, though laudable, only represented half of the goals of the local residents. Because success was defined in terms of homes built in the new way using green technology, community plans did not include economic infrastructure. This may, at least in part, explain why many homes remain vacant and unrepaired. Other symptoms such as the reemergence of drug dealing, a lack of jobs, and no grocery stores in the district point to unintended consequences resulting from narrowing the project goals too far. Turning a blind eye to some important social factors that were a part of the original community context helped to a certain point, such as in securing funds, materials, and expertise, but an opportunity was lost to impact the community more significantly in a positive way. In fact, some residents could argue they are worse off than before the project in that they now have a group of homes rather than a community like the one that had existed before the hurricane. The ability of scientists, or any other group, to define desired outcomes from purely science-related or technology-related goals can make the group successful in its defined criteria.
Unfortunately, like the generals who win battles and lose wars, by ignoring success criteria of other groups involved in a given project, science may miss as much knowledge as it gains. Worse, it may come to conclusions that are at least partially incorrect.
Throughout Pierre Bourdieu’s writing in Science of Science and Reflexivity, he relates his ideas to a number of works by previous scholars. His major criticism of most is their focus on the microcosm as a model for global themes. He argues that individuals and institutions within a field are shaped by the context of the field and the interaction between fields. He does nod to some of the other authors as well when their work relates to the idea of fields, though perhaps using different language than Bourdieu does.
Scientific capital for Bourdieu is symbolic capital such as scientific authority. Such capital leads to power within a given scientific field. Symbolic capital comes through both cognitive and communicative relations, generally within the field. It results from recognition by competitors who are referred to as agents. As competing agents attempt to discredit a work (as Karl Popper speaks to) and fail to, or find more evidence to support competing ideas, they in turn reference the work, adding capital. Such capital only comes within the framework created within a field by the agents in that field who hold scientific authority (power). Like Robert Merton and Margaret Rossiter, he supports the idea that the more power/capital one has, the more one tends to gain. His perspective differs slightly in that having power (scientific authority) gives the scientist more control over economic, social and cultural resources, allowing them to shape the rules of success within a given field. This also differs from Marx, who links power purely to physical or economic capital. Similar to Rossiter’s ‘Matilda’, when some scientists find themselves with less capital they are more inclined to appeal to outside sources of capital, meaning from another field (political, economic, etc.). Bourdieu refers to this as Zhdanovism. Like Bruno Latour, Bourdieu sees advancement (personal and of scientific knowledge) as a function of struggle. He sees the pattern of hybridization concepts expressed by Ben-David linked to the shifts in borders between fields. As rules or positioning changes within a field, the border between fields shifts as well. Players in the field (scientists) may ultimately shift fields if they see opportunity for more power in a related field rather than stay in their own. This effect also results from the Zhdanovism mentioned earlier. As younger scientists look to advance in their field, Bourdieu discusses two strategies each may choose to adopt. They may adopt a succession strategy of gaining scientific capital by following the rules created by those in power within the field. Or they might be subversive, seeking to break the structure and create a new hierarchy. In either case, it is the struggle itself (constantly challenging the existing hierarchy) that advances the individual, and also scientific knowledge. Bourdieu refers to structure within the field as creating a space of possibilities. By this he means there are differing ways to do science. The structure within the field which creates the space will be different from field to field, and is influenced by both individuals and institutions. Tension denotes difference within the field, and pressure is difference between fields. This is not unlike Latour’s concepts of competition over cooperation. Science, Technology and Society (STS) scholars, according to Bourdieu, should be less interested in the science of scientists, and more interested in the science of scientific knowledge. In noting this approach, he argues that statics and dynamics are inseparable. Referring to a biblical phrase found in the New Testament book of Matthew, Robert K. Merton speaks to a halo effect on successful scientists, and a reciprocal barrier to scientific initiates. Margaret W. Rossiter argues Merton puts too much emphasis on the positive side of the equation, the haves, while neglecting an understanding of those who are often overlooked, the have-nots.
The so-called Matthew effect describes a social and psychological base for a reward and communications system based on the biblical quote, "For unto every one that hath shall be given, and he shall have abundance; but from him that hath not shall be taken away even that which he hath." A kind of hierarchy forms in the scientific community. The worth of a scientific career is peer-adjudged based on metrics such as the quantity of publications, citations of one’s papers by others, and the value placed on the school or laboratory a scientist is associated with. In a sort of reciprocal measurement, works by scientists of rank are peer-adjudged higher based on the perceived rank of the author or co-author. The opposite is also true, in that works of lesser-known scientists that may be of equal or even higher quality compared with works created by ranking scientists are overlooked by many involved in peer review. Merton points out that recognized scientists understand this happens, so they often try to place others in a more prominent co-author position in a paper, or even leave out their own names altogether. They do this in order to help newer associates gain rank. Despite the good intention, it is often true that the lesser-ranked co-authors are overlooked, and the ranked author acknowledged. Even when the ranked author chooses to not be listed as a co-author, when it is known that the others are associated with the scientist of rank, the halo effect still encourages peers to give credit to the well-known name because the others are known to be linked to them. Rossiter renamed the negative portion of the Matthew effect as the Matilda effect after Matilda Joslyn Gage. She did this because Gage’s own experience reflects the effect. It is Rossiter’s contention that Merton spent too much of his explanation of the Matthew effect on ranked scientists, how the halo effect works, and how the haves attempt to help the have-nots. Rossiter prefers to speak to the negative impact on the have-nots, especially women contributors. Pointing to a number of historical examples in which women were either the primary author or a significant co-author and were simultaneously ignored, Rossiter demonstrates how women have a double hurdle to overcome. Along with the barriers identified by Merton, women have the additional challenge of overcoming sexism. In fact, in several places in his paper Merton refers to the work of Harriet Zuckerman, who created the data his paper is based on. Rossiter chides Merton for not identifying Zuckerman as a co-author, which he later agreed he should have done. Rossiter also points out that Merton may have been making a supportive case for the Matthew effect as functional, and suggested lesser-known scientists might learn how to take advantage of the system. Rossiter does admit there are some women scientists who have been noted by peers as ranking members of the scientific society, but she argues these to be exceptions. She also points out how the women of note had to achieve recognition through more overwhelming accomplishment than their male counterparts to rise in the scientific annals. The negative impact seems even higher on collaborating women when they are married to the ‘main’ (male) author. For example, it can be noted that Zuckerman was a student of Merton and they eventually married. The Matthew statement taken from the Bible does not really match, in context, the Matthew effect that borrows its name from the phrase.
Despite that, the positive lift given to some, and the artificial barriers imposed on others, seem supported by the arguments of both Merton and Rossiter.
In the making of scientific knowledge, Thomas Kuhn would say something seems true until something else seems truer. Karl Popper would say something seems true until it isn’t. Bruno Latour and Steve Woolgar would say something seems true until it doesn’t seem true. In a number of publications, Kuhn explained the growth of scientific knowledge in the form of a paradigm. A new way of explaining the physical world grows in popularity. It does so because the gist of the big idea better explains a particular set of conundrums than the previous big idea that had been accepted. A new paradigm becomes generally accepted as the previous paradigm, which had seemed to answer well enough, over time fails to answer all the questions scientists come up with on a given topic. This doesn’t happen right away. Scientists dedicate much effort and time to supporting the established paradigm. Eventually observations begin to raise questions that the established theories can’t answer. At some point some scientist or scientific group (usually newer, younger scientists less committed to the theories of the previous generation) begins to form new ideas to better satisfy the questions not answered by established science. The result is a paradigm shift, a new big theory, and the cycle repeats itself. In sharing this approach to changing scientific knowledge, Kuhn references Popper, whose perspective centers on the concept of the null hypothesis. Popper argued that evidence leads to a theory. The theory inspires more experimentation and debate. Eventually the debate leads to attempts to disprove a theory experimentally in the face of growing supportive evidence. With the null-hypothesis approach, a scientist looks for at least one way in which the accepted theory does not apply. Once a theory is not true in at least one case, then it is not true. Latour and Woolgar share works in which they review how some scientific ideas become accepted with or without supporting empirical data. They examine artifacts in the form of scientific journals. Theories gain popularity based on documented evidence (not necessarily proof) as written and published. Popularity of scientific ideas may have as much to do with how articles are written, or the reputation of the journal, authoring scientist, or institution an authoring scientist belongs to, as with any actual evidence. There are even specific types of statements used in articles that make the shared ideas more or less likely to be successfully believed by scientific readers. It is entirely possible for a theory to be accepted or rejected by the bulk of the greater scientific society based on the way articles for and against are written. Latour and Woolgar refer to the approach of theory adoption by journal article creation as ‘literary inscription’. It seems scientists, like the rest of us, can be more or less convincing, and more or less convinced, based on subjective factors as much as supposedly objective data. All of these beliefs about how scientific knowledge changes bring into question whether supposed ‘growth’ or ‘advancement’ are fit descriptors. Latour and Woolgar argue ‘fact’ and literary inscription may have congruence, but are not necessarily co-constructive. Their assessment clearly argues in favor of social factors as a guiding influence on what is accepted by the scientific community. Popper argues it is social factors that incentivize scientific torpedoing of theories.
Kuhn supports the idea that social factors influence those scientists who adopt and support an established paradigm: an older generation more invested in the old way of thinking. Likewise, those who seek a new paradigm are influenced by social factors as well: a drive to be the new leaders of scientific thought.
Really, this entry could be called Woolgar on MacKenzie on Yule vs. Pearson. Steve Woolgar wrote a critique of a paper published by Donald MacKenzie. MacKenzie's paper was titled Statistical Theory and Social Interests: A Case-Study. In part MacKenzie used an argument between two statistical theorists named George Udny Yule and Karl Pearson to explain how scientists are socially influenced in their choice of study, their approach to the study, and the conclusions drawn in their study. Woolgar's paper was titled Interests and Explanation in the Social Study of Science. Just as histories may reflect the perspective of historians as much as (or more than) they reflect reality, so too, argues Woolgar, is the case with ‘interests’ defining the scientific ‘acts’ of scientists. Woolgar uses MacKenzie’s review of the differing statistical theories of Pearson and Yule to make his points, though he alludes to an entire line of “Case Studies of Interests”. Woolgar shares a list of six very specific assumptions or approaches of analysts like MacKenzie who argue in favor of a causal relationship between the interests of a scientist and the eventual acts, or artifacts, produced by them. Within the assumptions of some case-study reviewers there is a belief that scientific action is expressive of concomitant interests. Woolgar argues that in case studies, reviewers such as MacKenzie inevitably make the point that the scientific actions are expressive of these concomitant interests. He (Woolgar) explains that these coexisting phenomena do not necessarily make either causal of the other. He further argues that the interests derived from scientific actions are more likely derived from the interpretation of the analysts themselves. As an example, Woolgar shows how MacKenzie attributes Pearson’s development of the rT and C concepts to an interest in furthering the adoption of statistical theories of correlation and regression, treating that interest as the motivating factor. Later MacKenzie adds a focus on analogy as a motivating factor (interest) of Pearson. Woolgar essentially asks if piling on interests (motivations) is a way for MacKenzie to give credence to his interests-cause-actions argument. Woolgar shows how those attempting to show a cause-and-effect relationship between interests and acts must first adopt a belief that the interest and the act are independent of each other. He then shows that one could argue as easily that the act brings about the interest as the other way around. This line of thinking risks a certain circularity of thought that questions the linear argument of analysts like MacKenzie. Woolgar also points out that neither interest nor act may be causal of the other. The idea that an analyst is completely objective in finding the root-cause interests is another of the suppositions Woolgar questions. MacKenzie argues independence of the interests from the acts to be able to assert one causing the other. By documenting a series of acts that form a pattern, the analyst is able to discover the root interest, or so goes the line of thinking. Woolgar argues that any number of potential motivations could explain a pattern of actions, so it would be difficult at best to discover the specific motivation(s) if not explicitly documented by the scientist being studied. The person analyzing potential interests is also influenced by their own interests as they attempt to pare down the list of candidate motivations.
If Woolgar takes issue with interests leading to acts, he accidentally supports the view by using his argument about analysts’ interests leading to their acts (their analyses). He himself is attempting to discover the motivation of the analysts of motivation. Another weak point of Woolgar’s perspective is his criticism of MacKenzie’s generalization about supporting documentation by Pearson and Yule, where MacKenzie simply states that they each supply more information about their interests in other documents. Woolgar argues that by generalizing these other works, and not giving any idea what the other works are, or even how many of them exist, MacKenzie is attempting to bolster his argument without actually giving evidence.
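As an aside of my own (not something Woolgar or MacKenzie walk through), the technical dispute was over how to measure association in simple cross-classified data. Yule proposed his coefficient Q, computed straight from the cell counts of a 2x2 table, while Pearson pushed measures like the coefficient of contingency C (and the tetrachoric rT mentioned above) that assume an underlying continuous distribution. A quick sketch with invented numbers shows the two measures can give rather different impressions of the same table, which is part of why the argument had teeth.

```python
# A made-up 2x2 table: rows = inoculated / not, columns = survived / died.
# (Numbers invented purely for illustration; nothing here comes from the
# Yule-Pearson papers themselves.)
from math import sqrt

a, b = 30, 10   # inoculated:     survived, died
c, d = 10, 30   # not inoculated: survived, died
n = a + b + c + d

# Yule's Q: association measured directly from the cell counts.
q = (a * d - b * c) / (a * d + b * c)

# Pearson's coefficient of contingency C, derived from chi-square.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
cc = sqrt(chi2 / (chi2 + n))

print(f"Yule's Q = {q:.3f}")       # 0.800
print(f"Pearson's C = {cc:.3f}")   # 0.447
```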
If necessity is the mother of invention, then Boris Hessen stretches the proverb to say that economic goals are the mother of necessity, which is the mother of invention, which is the mother of basic science. His argument stems from a few practical examples. It is juxtaposed to other beliefs that once some basic scientific discovery is made, then some specific applications are invented from the new knowledge, which are later put to economic use. Hessen supports the opposite view. As the industrial revolution encouraged increased division of labor, factory owners were looking for ways to increase productivity. Hessen’s specific example was the spinning jenny used in the cloth industry. The early models were powered by hand, then by water. Both methods had serious limitations. The goal was to allow the device to be operated “without fingers.” The answer seemed to be in the steam engine, which was originally designed for work in the mining industry. The hope was to make it such that a steam engine could be more generically applied to other industrial uses as a “universal motor.” The engine proved practical, but another issue arose in the increased need (read increased cost) for large amounts of fuel to heat water into steam sufficient to power the weaving factories ‘round-the-clock. This was yet another economic issue that required thought. Enter Nicolas Carnot, looking to improve the efficiency of steam engines to increase capacity or lower the required fuel. He asked the basic question whether power from steam heat was unbounded. He wanted to know if it was possible to generate steam power with no upper limit at a higher rate than the additional fuel used to increase the required heat. In his approach to determining the “coefficient of profitable activity,” he also managed to establish the foundation of a new scientific discipline within physics known as thermodynamics. Based on Carnot’s efforts, other scientists (e.g., Kelvin and Clausius) were eventually able to define the second law of thermodynamics. In this example Hessen depicts these events as a trajectory from a set of economic goals, to employment of an invention, to discovery of a new form of science. Hessen also points to another way the invention of the steam engine encouraged basic science. Every mechanical invention needs a motive power, a transmission mechanism of that power, and an executing instrument driven by the transmission mechanism. The study of the forms of motion of, and efficiency in, the steam engine led to more general studies of the motion of matter. Hessen specifically points to mechanics, heat, and eventually electricity. Similar to the eventuality of thermodynamics, study in each of these areas for practical application likewise generated new fields of investigation in basic science. As each of these more and more specific areas of science developed, Hessen draws a correlation to Marxist principles of classification. He points to Friedrich Engels’ conception of “interconnection” and “hierarchy” of the movements of matter as symbolized in the order of various study disciplines within science being both interconnected and forming a hierarchy of social arrangement. Engels provided theories of conservation and conversion of energy based on a “materialistic conception of nature” akin to ideas espoused by Marx and Lenin. Hessen argues these Marxist ideas lead to an understanding of the “historical succession” of the development of the associated sciences of motion.
This succession is yet another trajectory from economic goal, to practical invention, to scientific definition.
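Hessen doesn’t give the math, but the answer Carnot’s question eventually received is easy to state in modern terms: the work you can get out of heat is bounded, and the bound depends only on the hot and cold temperatures involved. Here’s a back-of-the-envelope sketch of that limit; the boiler and condenser temperatures are ones I made up for illustration, not figures from Hessen or Carnot.

```python
# The modern statement of Carnot's limit: no heat engine operating between a
# hot reservoir at T_hot and a cold reservoir at T_cold (absolute temperatures)
# can convert more than (1 - T_cold / T_hot) of the heat it takes in into work.
# Temperatures below are invented for illustration.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of input heat convertible to work (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

boiler = 273.15 + 150     # steam at ~150 C
condenser = 273.15 + 30   # cooling water at ~30 C
print(f"Carnot limit: {carnot_efficiency(boiler, condenser):.1%}")
# ~28% -- so the answer to Carnot's question was no: steam power is not unbounded.
```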
Here is the second of two final exam papers written for the History of Technology class last semester (Fall 2018). Try not to doze off. I'm dumb enough to have started another degree program. This time I'm attending Virginia Tech. My son Matt expressed an interest in reading whatever feeble work I manage to produce along the way so here is a first installment. I actually finished this class before the Christmas holiday so I'm uploading this after the fact. Try not to fall asleep. The idea of progress as linked with the most recent version of the idea of technology implies change. It also implies that the change is supportive of the goals or preferences of whoever is designating the change as progress. In Modernity and Technology by Thomas J. Misa, the author argues that as some see modernity and technological advancement as progress, other philosophers see these ideas linked as a negative. Among his proposals the author states “Technology may be the truly distinctive feature of modernity” as proposal 2. Misa posits that those who argue for technological determinism of social norms (modernists), and those who prefer a focus on societal change independent of technology (post-modernists) are both thinking too macro. He argues, “To constructively confront technology and modernity, we must look more closely at individual technologies and inquire more carefully into social and cultural processes.”
As Misa offers “proposals” in his article, likewise Melvin Kranzberg offers “laws” in his article Technology and History: “Kranzberg’s Laws”. His sixth law states, “Technology is a very human activity – and so is the history of technology.” In this section of the article Kranzberg argues “man the thinker” is also simultaneously “man the maker.” In fact he is saying that what man the thinker is thinking about is what to make and how to make it. Like Misa, he questions the technological imperative. Although we often shape our lives around technology such as the clock or the automobile, “this does not necessarily mean that the ‘technological imperative’… necessarily directs all our thoughts and actions.” As Misa states that the concepts around technology should look more at the specifics, the micro instead of the macro, Kranzberg actually gives some specific examples. In speaking of “technical devices that would make life simpler or easier for us but which our social values and human sensibilities simply reject”, he shares how we, in America at least, do not accept the use of communal kitchens. “Our adherence to the concept of the home has made that technical solution unworkable,” he adds. Where some might take advantage of the shared benefit of a communal kitchen, including better equipment with pooled resources and less work in cleaning and maintaining through shared effort, American culture does not see the technical advantage as a form of progress. The Misa writing helps to see some linkages between various aspects of technology that are not so obvious. For example under his proposal 4 comparing modernism and postmodernism he speaks to architecture as a technology. Modernists, he states, follow the idea that less is more, while postmodernists would argue less is bore. Another example of a strength is linking the concepts of reason and freedom. He shares both arguments of freedom through reason, and concern that it can lead to domination by reason, hence the opposite idea that reason usurps freedom. Similar examples through the work point to both the strength and weakness of the writing. Helping present multiple sides of the questions is helpful to arriving at a better understanding of the questions, but the author generally does not take a side. He frames the questions and shares the answers of others that disagree. He also generally only shares two sides to each of the posed questions. I am sure there are many more than two sides that could be understood. This post was originally published in March of 2017 on another platform:
The September 14, 2016 edition of RadioWorld posted an interesting interview with Ray Sokola. He is a VP at DTS. That's the company that not too long ago bought Ibiquity. You may already know that Ibiquity is the owner of HD Radio technology. The focus of the interview is on "hybrid radio". That's the phrase gaining traction these days when referring to integration of broadcast radio content with online-delivered content. NPR Distribution has been contributing to the hybrid radio industry effort through a service we call MetaPub. When Sokola was asked to describe hybrid radio he said it is, "the connection of traditional radio with the internet. This expands the listening experience to take advantage of the best of the past, present and future capabilities that cellular connectivity, the internet, streaming and apps have added to the traditional radio experience. The basic examples start with providing album art and easy purchase capability to a radio experience, but it goes way beyond that and is only limited by our imagination." With MetaPub we've started with text, graphics and links, but we assume public radio stations and producers will figure out more ways to use metadata over time. Sokola seems to be thinking the same way. "Hybrid radio is a platform for innovation that can be taken anywhere by creating the right connection between the radio, the internet, the rest of the vehicle, the auto manufacturer and the consumer. That, I think, will evolve in many ways." Encouraging broadcasters to catch up Sokola said, "Radio is the only consumer medium still not fully digital. Consumers have come to expect that all their audio and video entertainment sources will have added features and digital quality. If a radio station can't offer Artist Experience visuals for album art, station logo and advertiser value-added, they are last century's medium in the eyes of today's sophisticated consumer." Not noted in this article is that DTS recently purchased Arctic Palm. That company/product is one of the middleware tools some of our stations are using to interact with MetaPub. For the full article go here: http://www.radioworld.com/article/dts-seeks-to-immerse-you-in-the-soundfield/279674 We at NPR Distribution have been getting noticed for our MetaPub efforts. For example: MetaPub participation in California Shakeout made the front page. http://www.radioworld.com/article/metadata-test-is-part-of-quake-drill/280108 This post was originally published in March of 2017 on another platform:
One of the selling points of the NPR One app is that it follows what you listen to, then makes suggestions about other things you might also be interested in based on your tastes. It learns your tastes by noting what you listen to and what you don't listen to (skip). This pattern of recommending may sound familiar. If you've ever ordered something from Amazon you will recognize the suggested list that says something like "other people who ordered what you did have also ordered these…" Even more recently I noticed that Amazon had also tracked what I looked at but didn't order. After logging on I got a note that said something like "based on your recent searches you might be interested in some of these related items." Here's yet another story in IEEE Spectrum of how Spotify is jumping on the curation-suggestion-individualization bandwagon: http://spectrum.ieee.org/view-from-the-valley/computing/software/the-little-hack-that-could-the-story-of-spotifys-discover-weekly-recommendation-engine In this case, the idea/project was started by some engineers within Spotify. The recommendation tool at first didn't take off. One of the creators shared, "My hunch was that navigating to this page and looking at albums was too much work." The original tool required customers to go and check out the suggested content. Gradually they developed the more proactive tool. The article shares, "Their system looks at what the user is already listening to, and then finds connections between those songs and artists, and other songs and artists, crawling through user activity logs, playlists of other users, general news from around the web, and spectragrams of audio. It then filters the recommendations to eliminate music the user has already heard, and sends the individualized playlist to the user." Without telling people, they pushed out the feature to Spotify employees. Reaction was positive. As the tool became popular internally, Spotify decided to put it into the production system for customers. Whether you think this sort of thing is helpful or creepy, it's clear that companies believe it adds value. I'm not sure there is a place for this particular idea in all applications, but what I find interesting is that the idea came from someone seeing a need and a solution without waiting for "management" to point them down a path. From the article, "'This wasn't a big company initiative,' Newett says, 'just a team of passionate engineers who went about solving a problem we saw with the technology we had.'"
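The description in the article boils down to a pretty simple loop: find listeners whose history overlaps with yours, pull candidate tracks from what they play, drop anything you've already heard, and rank what's left. Here's a toy sketch of that shape in Python. It's my own simplification for illustration; it is nothing like Spotify's (or NPR One's) actual system, and all the names and numbers are made up.

```python
# Toy collaborative-filtering sketch of the "find similar listeners, then filter
# out what you've already heard" idea described above. Purely illustrative --
# not Spotify's (or NPR One's) actual algorithm.
from collections import Counter

# Hypothetical listening histories: user -> set of track ids.
histories = {
    "alice": {"t1", "t2", "t3", "t4"},
    "bob":   {"t2", "t3", "t5", "t6"},
    "carol": {"t3", "t4", "t6", "t7"},
}

def recommend(user, top_n=3):
    heard = histories[user]
    scores = Counter()
    for other, tracks in histories.items():
        if other == user:
            continue
        overlap = len(heard & tracks)      # crude similarity: shared listening history
        for track in tracks - heard:       # only consider tracks the user hasn't heard
            scores[track] += overlap
    return [track for track, _ in scores.most_common(top_n)]

print(recommend("alice"))  # e.g. ['t6', 't5', 't7']
```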
This post was originally published in August of 2016 on another platform:
The latest book I just started reading is The Meaning of Science: An Introduction to the Philosophy of Science by Tim Lewens. When I say I just started reading it, that's what I mean. I'm only a handful of pages into it. So far it's interesting. This morning I came across an article in the June/July 2016 issue of PM Journal. The author of Philosophy of Project Management: Lessons From the Philosophy of Science attempts an interesting comparison between the disciplines of science and project management. His name is J. Davidson Frame and he frames (cough cough) scientific philosophy into the following areas: General epistemological issues
Demarcate the discipline
Here is the full article: science_and_pm.pdf This post was originally published in August of 2016 on another platform:
In an article posted in IEEE Spectrum on May 27, 2016, Mark Anderson reports on a recent court case where Oracle accused Google of copyright infringement. Google has used Oracle-published Java API's in creating the Android OS and allowed developers in the Android ecosystem to create apps using the OS. Oracle says it will appeal. That remains to be seen. From the article: "The jury's verdict, so long as it withstands what Oracle said on Thursday would be an appeal, arguably opens the door further for developers to enjoy protected use of other companies' APIs. And that, says one leading software copyright expert, is good news for creative software developers and for users of the millions of apps, programs, and interfaces they create." As a tech user I've never been much of a Java fan. My beef was with the waves of Java updates that seemed at times to be daily. Interacting with more than one machine made it worse as each machine would give me the Java-needs-an-update message. I have noticed these messages have been fewer lately. That may be because more and more software systems are dumping Java. I don't know. Since I don't use a 'Droid phone I'm not sure how much of an issue this is, but obviously it has been an issue enough to cause the court battle. Google rubbed a little salt in Oracle's wound during the closing argument by bringing up Oracle's failed attempt at creating a mobile device OS of its own: "The closing argument was one in which the lawyer for Google was able to say: 'Look, they tried to make a phone with Java, but they failed,' Samuelson says. 'We did so, but we put five years' worth of effort into developing this wonderful platform that in fact has become this huge ecosystem that Java developers all over the world have been able get more of their stuff on because of this. Essentially, [Oracle's] argument is sour grapes.'" Though at my work we are no Google, we have had our own negative interactions lately with Oracle. I'm not sure what Oracle's business plan looks like, but I'm not buying stock. Here is the full article: oracle_v_google.pdf This post was originally published in August of 2016 on another platform:
From the June 2016 issue of PM Network there is a short entry about Microsoft placing server farms on the seabed in California. Not necessarily a philosophic topic, but I found the idea… well… cool (cough, cough). Here is the entire text: Data’s Deep Dive The technology industry has a heat problem. Massive data centers help deliver videos, email and social network content to billions of people – and generate tons of heat. This leaves tech companies with massive air conditioning bills and the constant risk of crashes from overheated servers. Microsoft thinks the solution lies at the bottom of the sea. Earlier this year, the Redmond, Washington, USA-based company concluded a 105-day trial of an underwater data center project. A team plunged a server rack encapsulated in a watertight steel cylinder 30 feet (9.1 meters) underwater off the coast of California. The capsule was outfitted with more than 100 sensors to measure pressure, humidity, motion and other conditions. The ocean water keeps the servers cool, eliminating expensive energy bills and reducing the risk of crashes. Subsea data centers might even be able to power themselves using tidal power or underwater turbines. The challenge is creating units that can function without regular checkups. Microsoft estimates that an undersea system may be able to go up to 20 years at a time without maintenance. To alleviate environmental concerns, the project team used acoustic sensors to determine if noise from the servers would disrupt ocean wildlife – and found that any sound from the system was drowned out by the clicking of nearby shrimp. Early tests also showed that heat generated by the servers only affected water a few inches around the vessel. The project’s test phase was so successful that it ran 75 days longer than planned. Researchers believe that mass-producing server capsules can slash setup time of new data centers from two years to 90 days. If that’s the case, a big new wave of data center projects could be on the way. Kelsey O’Conner [Photo: data center capsule loaded on a ship; picture from the NY Times.] This post was originally published in August of 2016 on another platform: An interesting focus paper was recently published by Radio World. The topic is Audio over Internet Protocol (AoIP) and is titled Radio AoIP 2016. Each piece in the focus paper reviews some aspect of the AES67 and AES70 standards. AES is the Audio Engineering Society. The AES has created many standards for the audio industry over the years. AES67 is intended to be an interoperability standard such that if audio is shared between two pieces of equipment over an IP network, and both pieces of equipment use this standard, then the audio should transfer even if the equipment comes from different manufacturers. AES70 is a standard for monitor and control of IP networked audio equipment. As it turns out, despite what this document encourages, organizations like us at NPR Distribution and public radio stations are not really able to be 100% on the AES67 standard. Why? Because not all the manufacturers of the equipment we use have adopted it. Some that have adopted it have made unique adjustments in the way they deploy the standard in their equipment. They likely take this route to encourage engineers to use their gear and not mix-and-match with other manufacturers (their competitors). This seems counterproductive to me. Often the members of these standards committees within AES come from the manufacturers themselves.
If they are dedicating some of the time (meaning money) of their senior engineers to create these standards then limiting full compatibility in some way would make the time and energy less helpful. Maybe they do it so they can market the fact that they have the specific AES standard available to purchasers. Maybe it's so they can get a look at how their competitors are approaching some of the same topics as they are. In either case it may be a bit of a Potemkin village if in the end only some adopt and others adopt in a slightly non-compliant way. Some manufacturers claim to be fully compliant and only put their unique spin into it using optional sections of the standard. If that is true then their gear would work (and perhaps does) with other fully compliant equipment. In these cases the vendor can rightfully claim to be offering "enhancements" in their application of the standard. Perhaps they are marketing their gear as AES67 compliant knowing that other manufacturers will not adopt so they can put the blame on the others when it doesn't work. If this perspective is true, then saying gear is compliant is for marketing purposes knowing that a full system is not likely to happen unless an organization like us uses all the components from the single vendor. It may be that eventually all manufacturers will become compliant and we can move from the older standards we use to the newest. At the same time it may also be that by the time all the manufacturers catch up to AES67 that a newer and better standard will come along, and the cycle would start all over again. You can see why our engineers have their work cut out for them trying to keep us up to the latest standards possible while not always having the full cooperation of the equipment manufacturers. This is just one of the many challenges to our engineers as they are planning what our system will look like during our next major roll-out beginning in FY2018. Here is the full focus paper: aoip.pdf This post was originally published in August of 2016 on another platform:
There is an interesting article in June's Satellite Today. The piece is titled Generation Next: Pablo Martin, Hispasat and is focused on what attracted this person to the satellite communications industry. The text describes, "His main responsibility is to design 'performing, creative and cost-effective' connectivity solutions." Seems like we at NPR Distribution have a similar focus. Martin says he is interested in satellite because it crosses boundaries and helps solve complex communications needs. He puts it this way, "These types of problems present, in some cases, one common factor: the presence of inequalities. Satellites, due to their nature, can help to provide a solution thanks to their equalizing effect. They don't distinguish political or geographical boundaries: they are reliable and they provide an immediate and effective solution." One of the factors keeping us at NPR Distribution as a satellite-centric network is the diversity of locations we serve and the inequalities of terrestrial bandwidth availability. Our research has shown that although the cost of terrestrial dedicated connections is coming down, at the large scale we need, it is still more expensive than satellite. There are places in the U.S. where you can't get dedicated terrestrial bandwidth, or it is so expensive as to make it impractical. We do believe that availability and cost for terrestrial networks may someday be more appropriate for our application, but not now. Thanks to our work on the PBS proof-of-concept project, we know it is possible to have a single receiver that can work in both satellite and terrestrial networks. In fact, for our future interconnect system this will be a basic technical requirement. It's important so that whichever network topology makes the most sense down the line, we will have the technology in place that is flexible enough to serve in either scenario. Towards the end of the article Martin describes a future with more ubiquitous hybrid networks using both satellite and terrestrial. We are doing that now. For example most of the national producers send live content to us over dedicated terrestrial networks and file content over the Internet. Unlike the limitation of our competition to only files and only Internet, we distribute live and file over satellite, Internet, and dedicated terrestrial networks. Here is the full article: http://interactive.satellitetoday.com/via/june-2016/generation-next-pablo-martin-hispasat/ This post was originally published in July of 2016 on another platform:
I was thumbing through the March 2016 edition of Via Satellite when I came across another article on the connected car. The article is titled Driving in the Fast Lane: How the Connected Car is Becoming a Must Have. Since this is a magazine focused on the satellite industry there is, of course, a section on roles for satellite bandwidth in this product market. Other than using the satellite to deliver our content (now metadata as well as audio), I'm not sure how much that portion of the article applies to my work at NPR. I found one idea surprising, as it was to the author. In the portion of the write up under the heading Where the Market Goes Next, there are some assertions I have heard anecdotally a few times in the past. This section seems to put actual data behind the ideas. The author references a recent study by Accenture that surveyed 15,000 new car buyers. According to the study 39% of respondents say that in-vehicle technology is the top priority when selecting a car. Only 14% said power and speed (engine and horsepower) was most important. In fact in-vehicle technology ranked three times higher than power and handling. I don't care what drives a person to buy a specific kind of car (pun intended). What jumps out in front of me (sorry about that one) is that if radio broadcasters want to lower risk in the future, and if drive-time is the most critical time for radio revenues, then broadcasters should do everything they can to attract smart-dashboard use of their content. MetaPub is one way for NPR Distribution to help that effort for public radio stations. It's a new service we are in beta test with right now. I doubt MetaPub will be a big revenue generation machine, just as I doubt the emergence of the smart-dashboard is the saving grace for radio broadcaster revenue. At the same time, like the lotto, you can't win if you don't play. If we are not supporting the new technologies, someone else will. With MetaPub, so far, we seem to be ahead of our competition. If broadcasters are not adding value in the fight for dash-screen real estate, someone else (Pandora, iTunes, Stitcher, etc.) will. Here is the full article: http://interactive.satellitetoday.com/via/march-2016/driving-in-the-fast-lane-how-the-connected-car-is-becoming-a-must-have/ |
Michael Beach: Grew up in Berwick, PA, then lived in a number of locations. My wife Michelle and I currently live in Georgia. I recently retired, but keep busy working our little farm, filling church assignments, and writing a dissertation as a PhD candidate at Virginia Tech. We have 6 children and a growing number of grandchildren. We love them all.