Forget the battlefield radios, the combat PDAs or even infantry hand signals. When the soldiers of the future want to communicate, they’ll read each other’s minds. At least, that’s the hope of researchers at the Pentagon’s mad-science division Darpa. The agency’s budget for the next fiscal year includes $4 million to start up a program called Silent Talk. The goal is to “allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals.” That’s on top of the $4 million the Army handed out last year to the University of California to investigate the potential for computer-mediated telepathy.
A document entitled “Bioeffects of Selected Nonlethal Weapons” was recently declassified by the US Army in response to a FOIA request by Donald Friedman for information relating to “… the microwave hearing effect, the Frey effect, artificial telepathy and any device/weapon which uses and or causes such effect”. The document returned is part of a 1998 National Ground Intelligence Center study on nonlethal weapons technology, the rest of which has not yet been declassified. It discusses in broad terms the effects of microwave radiation, laser light and sound on the human body and their potential for use in nonlethal weapons.
Microwave heating is discussed, citing a study conducted on Rhesus monkeys at 225 MHz. The report dismisses microwave heating as being of limited suitability due to the long ramp-up time needed to have an effect on the target (15-30 minutes). Contrast this with the Active Denial System, which works at 95 GHz and heats up the water molecules in the skin much more rapidly.
Microwave hearing is covered next discussing how the effect had been known for some time and indicating that the mechanism is probably due to pulsed RF frequencies causing thermoelastic expansion of the brain region around the cochlea. It goes on to discuss that it is quite possible to cause a voice to be heard in the skull due to this effect referencing a Walter Reed Army Institute of Research experiment which was able to send the words one to ten using this technique. The report suggests it as a possible way of communicating with hostages that their captors would not be able to detect.
The next technique discussed is disruption of motor control using a very fast (nanosecond) pulse at around the brain alpha frequency (15 Hz). This causes disruption of the corticospinal pathways, leading to muscle weakness, intense muscle spasms or loss of consciousness depending on the exact frequency. Also discussed is the effect of intense levels of sound (140-155 dB), which can be used to cause spasmodic motion of the eyes and nausea. Finally the report covers the effects of exposure to high-intensity laser radiation.
In all, the report shows that considerable work has been done to understand the potential of electromagnetic radiation as a nonlethal weapon; however, no mention is made of the deployment of these techniques in real weapons.
Victims of mind control report having their thoughts read, especially those that are verbalized internally, otherwise known as subvocal speech. Subvocal speech also happens when we read, although some “speed reading” techniques try to get you to stop doing this because it slows the reading process down. It has long been known that during subvocal speech electrical signals are still sent to those muscles of the face and throat that participate in ordinary speech, even though no sounds are generated. These signals can be detected in the form of electromyograms (EMG) using electrodes placed in the face and neck area.
In the previous article we saw how Lawrence Pinneo found some success in using electromyograms and electroencephalograms to detect thoughts in the form of subvocal speech. That was thirty years ago; this article looks at present-day attempts to do the same through the work of Charles Jorgensen at NASA. Jorgensen works at NASA’s Ames Research Center, where they have been investigating alternative methods of communication and control in hostile environments where normal methods are not always possible. Examples include astronauts, fighter pilots and rescue workers.
In contrast to Pinneo, Jorgensen placed electrodes in the neck region only. In the first stage of processing, signals from the electrodes were sampled at up to 10,000 times per second and run through a 60 Hz notch filter to exclude line interference and band-pass filters to remove anything outside the 30-500 Hz range.
The next stage of processing marks a major departure from Pinneo’s work: here the data is transformed from the time domain to the frequency domain. Jorgensen experimented with a number of transforms, including Fourier and wavelet transforms, and seemed to settle on a quad-tree wavelet transform.
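To give a flavour of what a wavelet stage buys you, here is a toy sketch using a plain Haar decomposition in NumPy (the quad-tree transform Jorgensen settled on is more elaborate, and a real implementation would likely use the PyWavelets library): the signal is recursively split into sub-bands and the energy in each band becomes a feature.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: approximations and details."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def wavelet_features(signal, levels=3):
    """Decompose `signal` (length divisible by 2**levels) and return the
    energy in each detail band plus the final approximation band."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(np.sum(detail ** 2))  # detail-band energy
    feats.append(np.sum(approx ** 2))      # residual low-frequency energy
    return np.array(feats)

# Example: an 8-sample "signal" reduced to 4 band-energy features
sig = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
feats = wavelet_features(sig, levels=3)
```

Because the Haar transform is orthonormal, the band energies sum to the signal’s total energy, which makes them well-behaved inputs for a downstream classifier.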
During the final phase, features extracted from the transforms were input into different machine learning algorithms to train the system to pattern-match the words or phonemes being examined. Jorgensen experimented with a number of techniques, including neural networks and support vector machines. Once the system had been trained it was then used to attempt to match new signals. Across a range of different applications it was able to achieve around 74% success for small vocabulary sets of up to 15 words.
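The train-then-match stage can be sketched with a support vector machine from scikit-learn. The three-dimensional feature vectors and the two-word “vocabulary” below are invented purely for illustration; the real systems worked on far richer wavelet features:

```python
import numpy as np
from sklearn.svm import SVC

# Pretend each row is a feature vector (e.g. band energies) for one utterance;
# labels 0 and 1 stand in for two words of a tiny vocabulary.
X_train = np.array([
    [0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [1.0, 0.0, 0.1],   # word 0
    [0.1, 0.9, 0.8], [0.2, 0.8, 0.9], [0.0, 1.0, 0.7],   # word 1
])
y_train = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="rbf", gamma="scale")  # radial-basis-function SVM
clf.fit(X_train, y_train)

# Matching a "new signal": an unseen feature vector near the word-1 cluster
prediction = clf.predict([[0.15, 0.85, 0.8]])[0]
```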
The results continue to prove the feasibility of thought reading using subvocal recognition. However, given the vast increase in computing power plus the many advancements in signal processing and pattern recognition since Pinneo’s time, these results are somewhat disappointing, although I should point out that nothing has been published by Jorgensen in 2-3 years. Not sure if we should read anything into that.
Despite this, some companies are looking to commercialize this technology. For example NTT DoCoMo are working on a subvocal mobile phone with the idea that people can answer their phone without annoying those around them such as in a movie theater. NASA are also working with QUASAR corporation to develop better sensors. In April 2006 Forbes magazine published an article on Jorgensen’s work entitled The Silent Speaker:
Jorgensen sees the day when electromagnetic sensors will be woven into the fibers of turtlenecks or rescue workers’ outfits. “As long as people have had machines and tools, they’ve been dependent on the physicality of the body,” Jorgensen says. “Separate those control activities from the body and it opens a whole new generation of interface design.”
QUASAR recently announced that it is working with NASA Ames Research Center to develop a hands-free UGV (unmanned ground vehicle) control system based on subvocal speech and forearm EMG. The purpose of the system is to allow soldiers to control the devices without having to set down their weapons or other equipment.
In December 2001 NASA’s Ames Research Center made a presentation to NorthWest Airlines in which they stated that they were working with a commercial partner to develop neuro-electric sensors to remotely monitor the EEG and ECG of passengers at airport security. This information was leaked to the Washington Times which published an article about it. The article drew denials from NASA and the Washington Times no longer has the article up on their site. The Electronic Privacy Information Center obtained details of the presentation under the Freedom of Information Act which can be seen here.
Techniques of EMG signal analysis: detection, processing, classification and applications, Raez, Hussain and Mohd-Yasin 2006
Small Vocabulary Recognition Using Surface Electromyography in an Acoustically Harsh Environment, Bradley J. Betts, Charles Jorgensen, 2005
Web Browser Control Using EMG Based Sub Vocal Speech Recognition, Chuck Jorgensen, Kim Binsted, 2004
The Silent Speaker, David Armstrong, Forbes Magazine, April 10 2006
The work of Lawrence Pinneo in the early seventies often comes up in mind control discussions. It is cited as evidence that a thought reading capability has been around for some time. A report by Pinneo is available at the “Christians Against Mental Slavery” website, so I decided to download it and take a look at what it was all about.
The report, “Feasibility Study for Design of a Biocybernetic Communication System”, written in 1975, details the findings of a three-year study sponsored by DARPA and conducted by a team from SRI (Stanford Research Institute) led by Pinneo, a neurophysiologist and electronic engineer. The work was part of a larger DARPA Biocybernetics Program whose directive was to evaluate the potential of measurable biological signals, aided by real-time computer processing, to assist in the control of vehicles, weaponry, or other systems.
The work is remarkable for the time given the computing power available. The computers used were a CDC 6400 and a LINC-8, machines which were typically loaded using punched cards and paper tape. The laptop I’m writing this blog on is probably on the order of many thousands of times more powerful than these machines.
The stated goal of the research was
… to test the feasibility of designing a close-coupled, two-way communication link between man and computer using biological information from muscles of the vocal apparatus and the electrical activity of the brain during overt and covert (verbal thinking) speech. The research plan was predicated on existing evidence that verbal ideas or thoughts are subvocally represented in the muscles of the vocal apparatus.
Covert speech is more commonly called subvocal speech today and refers to verbal thinking, where you think to yourself in words. At the time it was known that, even though no sounds are generated during covert speech, electrical signals are still sent to the muscles of the face and throat, and that these can be detected using electromyography (EMG), much as brain activity is detected using electroencephalography (EEG). So the goal was essentially to investigate electrical signals from the brain (EEG) and muscle nerves (EMG) during covert and overt speech and to determine what information could be extracted from them.
Electrodes were placed on both the head and facial musculature to capture EEG and EMG readings. See the original document for exact placement. Subjects were placed in a shielded booth where words were presented on a screen and they had to either say (overtly) or think (covertly) the word they saw. The signals from the electrodes were captured and fed through an analog-to-digital converter and into the computer.
The processing of the signals was fairly straightforward. For the study he worked with a set of 15 words. For each word the signal amplitude was sampled over a 6-second period, from three seconds before to three seconds after the word was uttered or thought, giving 255 samples in total. For each word he then constructed a template: all 255 sample points averaged over ten repetitions of the word.
When it came to predicting which word a subject was thinking of, he would simply compare the measured sample to each of the 15 templates and calculate the root mean square (RMS) of each template’s difference from the measured signal. The word whose template had the lowest RMS value was chosen as the word the subject was deemed to be thinking.
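Pinneo’s scheme is simple enough to reconstruct in a few lines of NumPy. Only the 255-sample window, the ten-repetition averaging and the lowest-RMS rule come from the report; the three-word vocabulary and the synthetic sine-wave “signals” below are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 255  # amplitude samples per 6-second window, as in the report
WORDS = ["left", "right", "stop"]  # hypothetical subset of Pinneo's 15 words

# Stand-in "true" signal shape for each word (a sine of a different frequency)
shapes = {w: np.sin(2 * np.pi * (i + 1) * np.linspace(0.0, 1.0, N_SAMPLES))
          for i, w in enumerate(WORDS)}

# Template = the 255 sample points averaged over ten noisy repetitions
templates = {w: np.mean([shapes[w] + 0.3 * rng.standard_normal(N_SAMPLES)
                         for _ in range(10)], axis=0)
             for w in WORDS}

def classify(measured):
    """Pick the word whose template differs least (in RMS) from the signal."""
    rms = {w: np.sqrt(np.mean((measured - t) ** 2))
           for w, t in templates.items()}
    return min(rms, key=rms.get)

# A fresh noisy "utterance" of "right" should match the "right" template
guess = classify(shapes["right"] + 0.3 * rng.standard_normal(N_SAMPLES))
```

Averaging over repetitions suppresses the noise in each template, which is why the report recommends periodically re-forming templates as the underlying patterns drift.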
This type of processing was repeated for a number of subjects for both overt and covert speech. A wide range of experimental factors was also tested such as electrode placement, male versus female, etc.
The report has a number of findings; here are some of the highlights:
EEG responses for covert speech mimicked those of overt speech for the same subject, electrode, and spoken word. When sources of error were reduced as much as possible, correct computer classification rates ranged from 52 to 72%.
EMG values performed better than EMG/EEG combined, which performed better than EEG alone.
When templates of one subject were used to classify words based on individual responses of another subject, the percentage of correct classifications for EEG responses was no greater than chance expectation. The percentage of correct classifications for EMG was greater than chance, but not nearly so good as within subjects. Thus, each subject’s biological patterns associated with speech appear to be unique.
Performance would be better if known sources of error were removed such as time and amplitude variations and muscle and eye movement artifacts.
Stored patterns should be “refreshed” periodically (i.e., new templates should be formed and updated) to take account of “drifting” cortical organization.
The system performed best for subjects who have strong hemispheric lateralization for language.
In the end Pinneo concluded that
…it is feasible to use the human EEG coincident with overt and covert speech as inputs to a computer for such communication. However, we also conclude that, without additional research, the EEG is not adequate for the design of a practical operating system; indeed, other methods than those employed here may prove superior. Nevertheless, enough information has been obtained during this project to specify the optimum parameters to use for an EEG-operating system and to suggest future research toward that end. Our results show conclusively that consistent, repeatable patterns exist in the EEG during overt speech (for example, see Figure 19) and covert speech (Figures 13 and 14) and that a computer can recognize these patterns a statistically significant percentage of the time.
Although the work was commissioned by DARPA it wasn’t exactly top secret; in fact, Time published an article on Pinneo’s work in 1974:
Pinneo does not worry that mind-reading computers might be abused by Big Brotherly governments or overly zealous police trying to ferret out the innermost thoughts of citizens. Rather than a menace, he says, they could be a highly civilizing influence. In the future, Pinneo speculates, technology may well be sufficiently advanced to feed information from the computer directly back into the brain. People with problems, for example, might don mind-reading helmets (“thinking caps”) that let the computer help them untangle everything from complex tax returns to matrimonial messes.
This was definitely groundbreaking work and there is no doubt that it grabbed the attention of people within the defense and intelligence communities. In the non-classified world this work has been taken up by Chuck Jorgensen of NASA, who I’ll come back to in a future post.
Feasibility Study for Design of a Biocybernetic Communication System, L.R. Pinneo, D.J. Hall, 1975
Mind Reading Computer, Time Magazine 1974
Subvocal Recognition, Wikipedia