
Sensory Substitution and the Human-Machine Interface; Bach-y-Rita 2003

In one of the Georgia Tech Ubicomp Lab meetings, Dr. Abowd claimed that if you're writing an academic paper, you want to be either the first or the last paper written about a particular subject. This paper is one of the first influential reviews of sensory substitution, and therefore holds a lot of influence over how the field has developed over the past 15 years. Sensory substitution is a field of human-computer interaction focused on the plasticity, or ability to adapt, of the brain. For those who are born without a sense or who lose one, information about the surrounding world can be lost: the blind cannot perceive through light, and the deaf cannot perceive through most sound. What if we used technology to deliver this information anyway? Instead of approaching the issue surgically, we can design non-obtrusive wearable devices that communicate the missing sensory information through the other, functional senses. For example, visual information about color, depth, and light intensity can be delivered to the blind through audio signals. This may seem completely unintuitive, and it raises the question of how these users could possibly interpret such data in a meaningful, practical way. This is where brain plasticity comes in. Over time, the brain adapts its neurochemical, synaptic, and receptor structures so that the user can perceive the information subconsciously, without consciously analyzing the incoming signals. Incredible! Just as you can look at a car and perceive 'red' without consciously analyzing the frequency of photons, so too can an adapted brain perceive 'red' through audio without consciously analyzing the signals. The paper claims that technology has developed to the point where these systems are absolutely practical and ready to be deployed to the general population, given an effort to do so (this was in 2003!).
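To make the idea concrete, here's a toy sketch of what a vision-to-audio mapping could look like. This is my own illustration, not the actual apparatus from the paper: it scans a grayscale image column by column, mapping pixel height to tone frequency and brightness to loudness, so a bright spot near the top of the scene becomes a loud high-pitched tone. All the function names and parameter values are made up for illustration.

```python
# Toy sketch of a vision-to-audio sensory substitution mapping
# (illustrative only; not the device described in the paper).
# Each image column becomes a brief "sound": pixel row -> tone frequency,
# pixel brightness -> tone amplitude. A real system would synthesize and
# play these tones while scanning the scene left to right.

def column_to_tones(column, f_min=200.0, f_max=2000.0):
    """Map one column of grayscale pixels (0-255, top to bottom)
    to a list of (frequency_hz, amplitude) pairs."""
    n = len(column)
    tones = []
    for row, brightness in enumerate(column):
        # Pixels nearer the top of the image get higher frequencies.
        frac = 1.0 - row / max(n - 1, 1)
        freq = f_min + frac * (f_max - f_min)
        amp = brightness / 255.0  # brighter pixel -> louder tone
        tones.append((freq, amp))
    return tones

def image_to_soundscape(image):
    """Scan a grayscale image (list of rows) left to right,
    producing one tone list per column."""
    n_rows, n_cols = len(image), len(image[0])
    columns = [[image[r][c] for r in range(n_rows)] for c in range(n_cols)]
    return [column_to_tones(col) for col in columns]

# A 3x2 image: a bright pixel at the top left, a dim one at bottom right.
image = [[255, 0],
         [0, 0],
         [0, 128]]
scape = image_to_soundscape(image)
# scape[0][0] -> (2000.0, 1.0): top-left pixel is the loudest, highest tone.
```

The striking claim of the paper is that, with practice, a user stops hearing "tones" at all and simply perceives the scene, the same way you don't consciously decode photon frequencies as 'red'.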
Additionally, the paper lays out a three-pronged direction for the field's future. Firstly, make these systems more robust and inexpensive so they can be deployed at scale. Secondly, explore using this technology not just to replace lost senses for the disabled, but to enhance the sensory capabilities of everyone. Imagine being able to subconsciously sense magnetic fields, ultraviolet light, or any other interesting phenomena! Thirdly, this field is a great opportunity to learn more about the brain itself; observing how the brain reacts to entirely new input is an important gateway to knowledge in the neurosciences.

Depression in the US: Some Numbers

The National Center for Health Statistics, a federal agency under the CDC dedicated to studying population health trends, publishes about 2-3 data briefings every month concerning health issues in the United States. From weight loss to infant mortality rates, these briefings cover a wide variety of topics, are relatively short, and contain to-the-point data figures and analyses. I strongly recommend going through their webpage and finding subjects you're interested in; I've found they're a great way to get acquainted with the problems a certain population or demographic faces. There's one collective issue that's written about a lot, with many briefings covering it. One such briefing is titled 'Depression in the United States Household Population 2005-2006', and I have three takeaways from it. Firstly and intuitively, Americans who fall below the poverty line have vastly increased rates of depression: starting at 12 years old, a person in poverty is on average four times as likely to have depression as someone above the line. Furthermore, 80% of all people who report being depressed describe tangible, functional impairment in their lives due to their condition. Finally, the vast majority (around 80%) of these people did not see a professional for treatment. None of these data points will probably surprise you, but it's important to have these observations in mind as tangible facts regardless. Two years after this briefing was published, the Department of Health and Human Services launched a program named Healthy People 2020, a plan to accomplish 1,200 health objectives using measurable data and baseline targets. One of these efforts is to combat the prevalence of depression and reduce the occurrence of suicide in the United States, but another briefing released in June 2018 showed that the results aren't going as expected. I'm not claiming that the 'Healthy People 2020' initiative has totally failed, as I really don't know too much about it.
What is factual, however, is that the goal was to reduce the suicide rate from 11.3 (per 100,000) in 2007 to 10.2 by 2020. Currently, at the end of 2018, the suicide rate is 13.5 and climbing slowly but steadily. These trends are covered in a recently published briefing titled 'Suicide Rates in the US Continue to Increase'. Among the notable statistics: females experience depression at about double the rate of males, but males die by suicide at a rate about 2.5x that of females. Additionally, the means of suicide differ significantly between the sexes, but that's not too important to the underlying message. Personally, I think we've made a lot of great strides in socially recognizing mental health as a serious issue in our country, but we obviously still have a long way to go. The data suggest that bringing more people above the poverty line would lower rates of depression and suicide, with the apparent 'sweet spot' at a household income around 4x the poverty line. Additionally, combining early detection procedures with normalizing treatment would help lower the number of untreated individuals. These solutions are pretty hard to implement in reality, however. I can't think of a great, practical way to help remedy this problem using technology, but I think an answer is out there somewhere. Scientists and engineers have an ethical duty to solve the problems of society, and this is a huge (and growing) tragedy.

Affective Computing - Rosalind Picard 1995

The linked source is actually a summary of a book, so this is going to be a Russian-doll synopsis of a summary. In the 1990s, popular media typically portrayed computers as cold, calculating machines, and did so accurately. Most people pictured computers as large boxes in their office for running spreadsheets, or a mainframe in a lab with big, flashing buttons. In 1997, Rosalind Picard set out to change this relationship between humans and computers by introducing the idea of 'Affective Computing', where 'affective' means centered on emotion. The goal of this concept is a system that can interpret a user's emotion and adjust itself to best cooperate, or even attempt to express an emotion itself.
Before even trying to explain these systems at a computer science level, it's necessary to define what emotions are, why they matter, and how they can affect a user. This requires a deep dive into psychology and neuroscience, fields that themselves contain many debates and opposing theories, so completely understanding emotions likely isn't going to happen. What we can do is try our best! The neuroscientist Damasio distinguishes 'primary' and 'secondary' emotions in a way that is helpful for understanding how a computer can sense emotion. Primary emotions are reflex-like reactions such as fear, disgust, and excitement that have concrete effects on one's blood pressure or galvanic skin response; scientists can attempt to recognize these through body sensors that record the respective phenomena. Secondary emotions are centered on cognitive processes, such as sadness or contentment. These are much more difficult to recognize: one person may express sadness through tears or slanted eyebrows, while another might reasonably express sadness with no facial change at all but a slow slur of speech. Which features best express emotion remains undetermined and requires progress in computer vision, speech recognition, and emotion research, but solving this problem could allow for an incredible amount of progress to be made.
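To give a flavor of how primary emotions could be sensed from those physiological signals, here's a minimal sketch of my own (not from Picard's book): it watches a stream of galvanic skin response samples and flags moments where the signal jumps sharply above its recent average, a crude proxy for a startle or fear response. The window size and threshold are invented for illustration; a real system would need per-user calibration.

```python
# Hypothetical sketch: flagging arousal spikes in a galvanic skin
# response (GSR) stream, one of the physiological correlates of
# "primary" emotions. All thresholds here are made up for illustration.

def moving_average(samples, window):
    """Simple moving average over the last `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def arousal_events(gsr, window=3, jump=0.5):
    """Return indices where the signal rises more than `jump` above
    its recent moving average - a crude startle/fear detector."""
    baseline = moving_average(gsr, window)
    return [i for i in range(1, len(gsr))
            if gsr[i] - baseline[i - 1] > jump]

# A flat signal with one sharp spike at index 4.
signal = [2.0, 2.1, 2.0, 2.1, 3.5, 2.2, 2.1]
events = arousal_events(signal)  # flags the spike at index 4
```

Even this trivial detector shows why secondary emotions are so much harder: sadness or contentment has no single spike to threshold on, which is exactly the gap the summary points to.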
Picard also outlines the interesting history of emotion research, and why emotion was discredited in the computer science field for so long. Psychological communities have long clashed over whether emotions are valuable components of our psyche, or whether they simply hold us back and resign us to a 'lower' nature. Serious research on this topic only started around the 1920s, and even then there wasn't a persuasive argument for the value of emotions until the 1960s, and it wasn't strongly supported until the 1990s! Since classical artificial intelligence was founded in the late 1950s, the foundation of the field was rooted in hard logic and tended to discredit anything else. This trend continued even after valid research was published proving the usefulness of emotions in coping and decision-making, and, speaking from personal experience, it even continues today. You could ask, "Yeah, this is great and all, but why would we actually want to do this? This is a whole lot of work just to make an adaptive computer interface." Giving computers emotional intelligence might not solve every problem people have with computers, and it still leaves an enormous number of questions about interface design, but Picard claims that bringing affective computing to life would make the world a warmer and more humane place for everyone. I would agree. This summary still leaves so many questions about the application of an affective computer. What form would such a system take? What are the use cases? I think to better understand this vision we would need to read all 308 pages of Picard's writing.