Monday, April 30, 2018

Old and New Problems with Behaviorism in EdTech

About 100 years ago, research in education (as a formal academic discipline) was relatively new, and there was a rift between a philosophical approach to education research on the one hand (e.g., John Dewey) and a scientific approach on the other (e.g., Sidney Pressey and Edward Thorndike) (Lagemann, 2000).

Actually, before that, if you go back another 50 years or so to the mid-1800s, public education in the U.S. was spreading like wildfire. Compulsory K-12 schooling laws and the Morrill Act's land grants for colleges helped spread education to the masses (well, to white people, that is).

The rise in public education led to a teacher shortage, and women soon overtook men as the predominant gender in teaching (a demographic shift that persists today, with roughly 80% of K-12 teachers being white women, while higher education remains more male-dominated). Some of these women began doing scholarly research, emphasizing the nurturing and supportive aspects of education.

In the early 1900s, industrialization was changing every aspect of society. New methods of scientific experimentation were being applied to all kinds of things, including the management of people and organizations. (Think Taylorism, right?)

Folks like Pressey and Thorndike started creating machines for their scientific education research.

These "education scientists" basically told the women: “thanks, we’ll take it from here” and kind of patted them on the head.  Men and their scientific methods began dominating education research (at colleges and universities), while women continued to teach in the lower grades without conducting research (Lagemann, 2000).


The early behaviorists believed that people were like animals or even physical objects, and that everything could be explained by natural laws of cause and effect. They also believed that observable behaviors, like memory recall, were all that mattered in education.

This is in contrast to Dewey, who talked about education as “democratic” and as a shared social experience. "Education is a process of living," he once quipped, "not a preparation for future living," emphasizing the experiential or intrinsic value of education rather than a preoccupation with "results" or "outcomes."

Meanwhile, education policy makers were intrigued by these new methods of punishment and reward, and by how things like propaganda could be used to indoctrinate youth and instill a sense of national pride through education.

But, as Dewey might suggest, education should respect people’s freedom. Sharing different viewpoints against a backdrop of “traditional” knowledge should be a function of education.

Punishment, reward, and memorization – that’s how we train parrots. And people aren’t like parrots. Right? Well, for B.F. Skinner, people were like pigeons. (Actually, he used rats at first, but pigeons lived longer.) Skinner thought his behaviorist predecessors got some things wrong: it wasn’t just about automatic responses in individuals (like dogs salivating when a bell rings); behavior was also shaped by its consequences in the environment, what Skinner called “operant conditioning.” If we could pinpoint just the relevant environmental factors (and Skinner thought there weren’t that many), we could understand (and control) all human experience. Skinner also believed that freedom was a myth and that thoughtful self-reflection was useless.

The problem with behaviorism is that it works. At least on some level. Sure, people respond to stimuli. We eat when we’re hungry. We feel sad or happy when things are designed to make us feel sad or happy. The problem is that there are way, way too many variables at play in complicated human experiences. And when data and design are used to manipulate and control these superficial behaviors, it can be objectifying. Now you might say behaviorist tools in math learning or gamified apps are useful or even fun.

Sure.  But.

This can be a slippery slope. Design always contains bias, and to account for that bias, tools and systems should be completely transparent about what they are doing and how they work.

Predictive systems are only as good as the things they measure, and if they measure things like memorization, or promote superficial behaviors (like nudging and prodding), then maybe they aren’t supporting good education.

Behaviorism persists in software design, which often assumes the brain is like a computer or a machine. It isn’t.

Software designers should focus on developing tools that reflect a different set of values, like creativity, empathy, and collaboration. And there are examples of games and software that do this. Games and "gamification" might also offer social, collaborative, and creative benefits.

Behaviorism implies that it’s OK for people to be manipulated and controlled. And behaviorist technology sometimes hides the people, powers, and ideologies behind it.

We know all too well that when software is used for manipulation, it can cause problems.  In an age of fake news, advertising algorithms, and privacy violations, we need education researchers and educational technologists to not only consider these problems but also to face them head on. 

Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.

Friday, August 11, 2017

How to engage in racial discourse when you’re white

First, rather than positing an authoritative stance on any sort of empirical truth about race and identity issues: don’t. Instead, listen and learn (primarily from people of color, not white males; I realize the hypocrisy of that statement and intend for this piece to be a starting point, an overview of things I have learned from people of color, with plenty of links for further learning). Amplify marginalized voices. But do not take credit for them or pat yourself on the back for being an ally; if you think about it, that reinforces a position of privilege and dominance, as it redirects attention to a particular identity that is already hegemonic. I also understand that my views are imperfect. I will own the problematics of my arguments, and I will own my continual growth and understanding. That is my responsibility.
What I wish to confront is the discomfort associated with thinking deeply and frequently about these topics; they are messy and emotionally charged. But for some (namely, people whose identity is presented to the world as a person of color), the discomfort is ubiquitous in their everyday experience, as they face constant reminders of how the world has been shaped around race. I also wish to avoid a dismissive attitude, like the one described by an African-American college student who felt shut down when attempting to discuss race with fellow students and professors who were white.
Race is both structural (i.e., woven into the institutions, policies, and practices that influence social behavior) and subjectively individual; especially with regard to the latter, it is impossible for a white person, especially a white male, to truly empathize with the complex experience of others (I realize that even saying this is dangerously close to positioning “white male” as normal or default - that’s another pitfall to watch out for; see how it reinforces the dominance of a particular identity?).
Educate yourself about history not as some fixed series of separate events but as a continuum - a way of perpetuating and unraveling dominant ideologies. Try to truly understand privilege not in a personal sense, but in a historical and structural sense.
Understand the myth of meritocracy and the real phenomenon of unconscious bias if you find yourself trying to defend arguments that we live in a “post-racial” society or that access to opportunities and jobs, etc. are solely based on individual skills.
Ask why the number of missing children of color, especially among Native American populations, is disproportionately high and under-reported by the media.
If you prefer to look at facts, there are plenty of rigorous studies that verify systemic racism; be careful not to cherry-pick data that is presented in a misleading or biased way if your goal is to challenge racist claims (and ask yourself why you think it is important for you to adopt a “devil’s advocate” position in the first place).
Defending things like free speech and the open exchange of ideas is important, but recognize that these things are inherently influenced by ideological hegemonies, which can be barely perceptible at times but which prevent speech from being truly free (free from false assumptions and values that are historically and socially reinforced). Recognizing such ideological hegemonies leads to demands that may be seen as controversial in discussions of free speech, like safe spaces or trigger warnings. Just as society is not post-racial, free speech, in order to be “free,” must be predicated on formats and language systems that are not infiltrated with hidden bias and taken-for-granted assumptions created and reinforced by dominant groups (often out of self-interest). Interrogating the assumptions that preclude true freedom of speech or open dialogue is a project that should be taken up by white people, and it starts with listening and learning.

Other links to check out:
When you forget to whistle Vivaldi (and anything by Tressie McMillan Cottom)
Deray McKesson on Twitter
Indian Country Media Network
So many more. . . what resources or people do you follow?

Monday, July 24, 2017

Is innovation necessarily good?

Proponents of innovation will suggest that failing fast, rapid prototyping, and a bias toward action are good things. However, presented as ends in themselves, these qualities do not speak to any broader aims or ethics.
Tom Kelley asserts that rapid prototyping, generating as many ideas as possible, and solving observable problems and inefficiencies are effective ways to generate ideas which may lead to successful innovation. And Clayton Christensen’s popular “disruptive innovation” theory has also led to much discussion and attempted application in both public and private sectors, further fortifying the notion of “innovation” as a desirable activity. Another factor in the popularity of innovation as a concept is the growth of agile project management methodologies and design-based research in public and private enterprises.
The term innovation derives from Latin words for newness or renewal. In pre-Renaissance Europe, calling someone an “innovator” was actually an insult, as it suggested association with heretical radicalism. The term gained currency as a desirable quality during the industrial revolution, as it increasingly referred to technical innovation. Analyzing the frequency of the term “innovation” in books and literature (using Google’s corpus analysis tool, the Ngram Viewer) reveals that its frequency remained relatively static in the early part of the 20th century until after World War II, when it saw a sharp increase.
The adjective form of the word, “innovative,” was barely present in the literature until it too rose rapidly in the mid-to-late 20th century.
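As a rough illustration of what this kind of corpus analysis does under the hood, here is a minimal sketch in Python. It computes the relative frequency of a term per year (occurrences divided by total tokens), which is essentially what the Ngram Viewer reports over millions of scanned books. The tiny corpus and the numbers below are invented purely for illustration.

```python
from collections import Counter

def term_frequency_by_year(corpus_by_year, term):
    """Relative frequency of `term` per year: occurrences / total tokens.

    `corpus_by_year` maps a year to a list of documents (strings).
    """
    freqs = {}
    for year, docs in corpus_by_year.items():
        tokens = [w.lower() for doc in docs for w in doc.split()]
        counts = Counter(tokens)
        total = len(tokens)
        freqs[year] = counts[term.lower()] / total if total else 0.0
    return freqs

# Tiny invented corpus: "innovation" appears more often in the later year,
# mimicking the post-WWII rise described above.
corpus = {
    1910: ["science and industry advance", "progress through invention"],
    1960: ["innovation drives growth",
           "innovation and research funding",
           "the age of innovation"],
}
freqs = term_frequency_by_year(corpus, "innovation")
```

The real tool normalizes against a far larger corpus and smooths across years, but the core measurement is the same simple ratio.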
The rise in interest in innovation could be related to war and conflict. World War II introduced the atomic bomb and the beginning of nuclear proliferation, which led to a global scramble to produce scientific advancements that might yield more sophisticated weaponry. The competitive spirit was driven not by altruistic visions of human progress but by weaponry, war, and paranoia. In fact, the early infrastructure of the Internet was rumored to have been created as a defensive maneuver against communist threats. According to some, the United States needed a decentralized communication infrastructure in case an adversarial power (such as Cuba) was able to destroy (via missile) the central mainframe of our military communication network. This led to the creation of ARPANET, the primitive infrastructure foundational to the modern Internet. So one could argue that the Cold War and global conflict contributed to innovation as a public good.

The invention of the Internet and the World Wide Web may be a key contributor to the rise in popularity of innovation. The Internet has in some ways democratized the means of content production, allowing amateurs and professionals alike to disseminate products, services, and ideas to global audiences in real time. The ability for anyone to advertise, sell, and buy goods and services, and to communicate across the globe, is unprecedented in human history. This capacity for distribution also introduces new threats to incumbent industries; new models introduce new competition. As file sharing epitomized the peer-based democracy of the nascent web, so too did it reveal the capacity of large corporate coalitions (of, for example, the film, music, gaming, and software industries) to strike back in the form of litigation, propaganda, and eventually new business models. Today, new tensions play out as net neutrality is perennially challenged by dying cable giants, monopoly ISPs, and sprawling video platforms.
Robertson classified innovation as continuous or discontinuous; by continuous, Robertson means the degree to which an innovation builds upon existing conventions. This is related to Rogers’ notion of “compatibility,” which he describes in terms of how consistent an innovation is with the existing values, experiences, and needs of adopters. Most innovations arguably fall into this "continuous" category. Žižek describes the difference between Ptolemaic change and Copernican revolution, where Ptolemists attempted to retrofit new discoveries into their incorrect model of the universe; revolutionary change requires a more drastic abandonment of "existing values." The question is whether innovations are considered holistically in terms of ethical impact, potentially at a global scale. For instance, Apple's iPod could be regarded as innovative, but if it exacerbated exploitative practices related to precious metal mining, then the iPod could also be considered destructive or harmful.
One criticism of Robertson’s and Christensen’s models of innovation is that they are framed in market and economic contexts: profit is a de facto good; satisfied customers are a de facto good. It is unclear to what degree these theories generalize to sociological or sociocultural frameworks, especially when considering broader ethical implications. Constant change and novelty may not be a marker of progress. From a techno-critical perspective, there may be consequences of constant innovation related to the environment, democratic participation, bureaucratization, globalization, mental health, etc., that are not researched critically enough due to a positive bias toward innovation. Appliance developers may not consider the impact that manufacturing has on overseas labor practices or precious metal mining. Technical advancements always have ethical ramifications.
Lastly, one must question whether innovation is a real phenomenon, or whether all new inventions are in fact incrementally derivative. For example, the printing press is regarded as one of the most significant innovations of the past millennium, yet it is actually an appropriative combination of existing inventions: namely, moveable type and the cider (or wine) press (makes one wonder where the divine inspiration came from). To what extent is a derivative combination of inventions an innovation, as opposed to something appropriated or simply modified? Innovation research is still in its infancy, and there are a multitude of topics and questions that can advance our understanding of innovation beyond simply how to do it more effectively.

Wednesday, July 19, 2017

My reading list

I have several stacks of books on my desk right now, organized by category. They are:

  1. Books about discourse analysis (including linguistic analysis and critical discourse analysis)
  2. Critical theory (Habermas, Horkheimer, etc.) and some of the philosophy that underpins it (Hegel, Marx, some phenomenologists), plus books that are critical of scientism (I'm including Popper in this category)
  3. Books about the purpose and aims of higher education
  4. Books about instructional technology/educational technology (especially its history, i.e., Saettler, etc.)
Next to the desk is the pile of books that are not as germane to my research, but are tangentially related. They include some of the more abstract post-modernism (Baudrillard, for example), and books about pedagogical approaches that are more grounded in "traditional" psychology. Douglas Hofstadter is in that pile. Sartre is in that pile. 

Sometimes I read a few pages, then go watch TV for the rest of the night and feel guilty.