Most rugby players know that it is important to run towards where the ball is going to be, not where it has been.
Similarly, great leaders have the ability to see around corners (what is coming next) in business and know how to harness disruptive influences to give their company a strategic advantage.
Seeing around corners, that is, recognising and acting on disruptive inflection points before they happen, has become an important part of implementing strategy in a world of technological innovation.
LIDAR (Light Detection and Ranging) technology offers a more literal, electronic way of seeing around corners.
LIDAR uses a pulsed laser emitting nanosecond-long pulses and records the time each pulse takes to return to the source, enabling a computer to generate a three-dimensional (3D) model of the surroundings with great accuracy.
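The time-of-flight principle behind this is simple arithmetic: since the pulse travels to the target and back at the speed of light, the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and the 100-nanosecond example are illustrative, not drawn from any particular LIDAR system):

```python
# Illustration of the time-of-flight principle a LIDAR sensor relies on.
C = 299_792_458  # speed of light in metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    """One-way distance to a target, given the round-trip pulse time."""
    # The pulse covers the distance twice (out and back), hence the halving.
    return C * t_seconds / 2

# A pulse returning after 100 nanoseconds implies a target about 15 m away.
print(round(distance_from_round_trip(100e-9), 2))  # -> 14.99
```

Repeating this measurement millions of times per second across a sweep of angles is what lets the computer assemble an accurate 3D point cloud.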
This creation of a 3D image is broadly similar to the echolocation, or ultrasound waves, a bat uses to “see” its environment and prey. LIDAR is used in self-driving cars and enables them to see around corners.
More recently, this technology has even been built into newer iPhone models to be used in augmented reality (AR) applications.
But today is not about the figurative seeing around corners in business or the amazing capabilities of LIDAR, but the ability of humans to literally see around corners. It may sound futuristic, but it seems that the future is closer than we may imagine.
A little more than three years ago, a team of researchers at the Moscow Institute of Physics and Technology and the Russian corporation Neurobotics recorded the brainwaves of people watching 20 minutes of ten-second-long video clips consisting of five topical categories. Just by analysing the brainwaves and interpreting the encephalograph data of the participants, the researchers were able to determine the category of video the subject was watching.
In the second phase of the research, the Russian scientists developed two artificial neural networks (ANN).
They trained the first one with data to create images from visual "noise" in three of the tested categories.
The second artificial neural network was used to turn encephalograph data into comparable noise. When combined, artificial intelligence was able to draw astoundingly accurate images of what a person was looking at solely from their real-time encephalograph data. To some extent, the computer and artificial intelligence were able to read the minds of people by interpreting their brainwaves.
Currently, Daniele Faccio and Gao Wang, researchers at the University of Glasgow in the UK, are also working to bring together artificial intelligence (AI) and human brainwaves, but this time, to identify objects around a corner that humans are not able to see naturally.
This technique is known as “ghost imaging” and can reconstruct the basic details of objects hidden from a person’s view by analysing how the brain processes very faint reflections on a wall or another surface.
“Ghost imaging” is not totally new since, in the past, video recordings of faint reflections cast by an object onto a nearby wall have often been used in research. However, the results were not always very accurate.
The research done by the University of Glasgow researchers is part of non-line-of-sight (NLoS) imaging, a technology that allows people to see objects that are obscured by obstacles. Often today, as in the case of LIDAR, a pulsed laser is beamed onto a surface, around a corner and back to a camera sensor. Artificial intelligence and algorithms are used to decode the scattered returned light to identify the object and create a three-dimensional image.
But Faccio and Wang’s research was very different. They projected an object onto a cardboard cut-out and then allowed a person, wearing an electroencephalography headset that monitored their brainwaves, to see only the dispersed light on the wall instead of the actual projected light patterns around the corner. The participants could not see the objects directly, but their brains registered the scattered light of the object.
An electroencephalograph (EEG) uses sensors on the scalp to record brain activity. In the Glasgow experiment, the electroencephalography headset read the brain signals in the visual cortex of the person and fed the data into a computer.
Brainwaves are the electrical impulses produced when an individual's behaviour, emotions, and thoughts are communicated between neurons in the brain at a variety of frequencies; they can therefore be accurately measured by an electroencephalograph.
In the end, the computer software was able to identify the obscured object through the use of artificial intelligence algorithms to decode the scattered light patterns and analyse the data gathered from the person’s brainwaves.
Repeating this process with different random patterns of light produced a sequence of data points about how the light intensity varies over time.
The researchers thus found that the strength of the electroencephalograph signal correlated, more or less, with the intensity of the light on different areas of the wall. Eventually, they were able, within about a minute, to accurately reconstruct 16 x 16 pixel images of simple obscured objects whose diffused reflections were flickered at 6 Hz for two seconds and registered by the visual cortex of the viewer.
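The reconstruction step can be sketched in a few lines of standard computational ghost imaging: correlate the fluctuations of many random illumination patterns with the single intensity value measured for each one. This is a generic sketch of the technique, not the Glasgow team's actual code; the EEG signal is stood in for here by a simulated total-intensity measurement, and the object, pattern count, and seed are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # a 16 x 16 pixel image, as in the Glasgow experiment

# Hypothetical hidden object: a bright square on a dark background.
obj = np.zeros((N, N))
obj[5:11, 5:11] = 1.0

# Many random light patterns are projected; for each, only ONE number is
# measured (total reflected intensity, standing in for EEG signal strength).
patterns = rng.random((4000, N, N))
signals = np.array([(p * obj).sum() for p in patterns])

# Correlate each pattern's deviation from the mean signal with that pattern:
# pixels belonging to the object co-vary with the signal and emerge brighter.
weights = signals - signals.mean()
recon = np.tensordot(weights, patterns, axes=1) / len(patterns)

# The reconstruction correlates strongly with the hidden object.
corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
print(corr > 0.5)  # -> True
```

The key point the sketch illustrates is that no camera ever images the object directly: a sequence of single-number measurements, combined with knowledge of the projected patterns, is enough to recover the picture.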
The significance of this research, according to Professor Daniele Faccio, a professor of quantum technologies at the School of Physics and Astronomy, is that: “This is one of the first times that computational imaging has been performed by using the human visual system in a neuro-feedback loop that adjusts the imaging process in real-time.”
The researchers could have used available technology to detect the objects, but their focus was on exploring possibilities to augment human capabilities in future.
According to Faccio, the next steps in their research will be to extend the capability of artificial intelligence to provide three-dimensional depth information and to combine data from multiple viewers at the same time for a more complete picture.
Some researchers used to joke about brain process studies using an encephalograph as similar to figuring out the internal structure of a steam engine by analysing the smoke left behind by a steam train. But it is now evident that encephalograph readings contain sufficient information to reconstruct an obscured image observed by a person.
Soon, humans may be able to see hidden objects in real-time when artificial intelligence is combined with human vision.
The new “ghost imaging” technique could indeed allow us to see objects around the corner by merely interpreting our brainwaves.
The application of this technology for vehicle drivers, pilots, the police and the defence force to augment human vision could make a huge difference in future safety.
But ghost imaging using human vision and artificial intelligence also opens up many completely novel applications, such as extending human vision into invisible wavelength regimes in real-time.
And most impressive of all is that the technique is non-invasive: it does not need neural interfaces that require complex surgery, as envisioned by Elon Musk and his neurotechnology company, Neuralink, which develops implantable brain-machine interfaces.
But this is not all. Other recent research demonstrated that scientists are now able to read people's minds and could even tell their politics from a brain scan! Would that not be interesting in South Africa? But that is a topic for another day.
Professor Louis C H Fourie is an Extraordinary Professor, University of the Western Cape.
BUSINESS REPORT