Everyone and their grandma will be able to drag and drop a few functions with fun emoticons to create fully functional applications. This is not the future, it is happening right now. But the underlying technology will become so big and complex that no one person will understand the whole of it.
Before computers were a thing, we had math. Groups who knew mathematics had strategic advantages over those who didn't. They were victorious in war, better at architecture, and, forgive me here, better at reasoning. Math allowed people to solve real-world problems efficiently. Problems could be transformed into ideas, written down on paper, and shared with a greater audience. Math is universal and knows few, if any, language barriers.
As important as math was, it was limited by the ability of the person doing the computation. The speed at which humans solve equations varies greatly from person to person. We had to wait for advances in chemistry and physics to build physical devices that could perform these computations at a consistent speed. From vacuum tubes to silicon, we transitioned from the human brain to the CPU. The CPU receives a problem on one end and outputs a solution on the other, and it does so at incredible speed.
To understand where computer technology will take us, it is important to study the phases it went through.
Phase 1: Theory
Computing starts with a theory, and we have come up with many theories throughout the centuries that couldn't be properly tested. Programming languages are the tools we use to convert those theories into machine code that computers can process. Today's computers are advanced enough to simulate almost any problem. From Newton's law of gravitation to the genetic code, all can be (and is) simulated on a computer today. Old ideas that only existed on paper are now implemented on a computer and run at incredible speed.
Matrix algebra was used to solve problems for centuries, but never at the speed we have today. Converting it to machine-readable code allowed us to create the complex graphics we take for granted.
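To make that concrete, here is a minimal sketch of matrix algebra doing graphics work: rotating a 2D point with a rotation matrix, the same operation a graphics pipeline performs millions of times per frame. The `rotate` function and its names are my own illustration, not anything from a particular graphics library.

```python
import math

def rotate(point, angle):
    """Rotate a 2D point about the origin using a rotation matrix."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    # Matrix-vector product: [c -s; s c] applied to [x; y]
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees lands on (0, 1), up to floating-point error.
x, y = rotate((1.0, 0.0), math.pi / 2)
```

Centuries-old math, but a CPU (or GPU) can apply it to every vertex of a 3D scene dozens of times per second.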
The computer helped us improve, or even rectify, some of those theories. As computers become faster, programmers tackle the new challenges we face, like processing large amounts of information to make sense of it.
Phase 2: Text
History starts when our ancestors began to write down their stories. From hieroglyphs to Latin, from calligraphy to Ethiopic, we read our history through text. And computers, being much faster than humans, can process text at a much higher rate. Reading a physical book may be more pleasant, but if you are trying to find a particular passage, searching through a computer file is much faster.
When the web was born, the bulk of the information created was in text format. As computers became more accessible to the public, they gave us the ability to create text with a low barrier to entry. We basically created text versions of most of the information we had in the past. Processing text data is not only easy, it is also widely practical. Search engines have made it second nature to find an obscure piece of information, and a search engine is just a (sophisticated) text processor.
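At its core, that "sophisticated text processor" is an inverted index: a map from each word to the documents containing it. The sketch below, with hypothetical `build_index` and `search` helpers of my own naming, shows the idea in its most stripped-down form; real engines add ranking, stemming, and far more.

```python
def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    matches = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*matches) if matches else set()

docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick brown dog"}
index = build_index(docs)
# search(index, "quick brown") finds documents 1 and 3.
```

The lookup never rereads the documents; the index answers in roughly constant time per query word, which is what makes searching billions of pages feasible.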
Text is easy to process but hard to understand. The US Constitution is reinterpreted every day, yet it is the same unchanging text; different meanings come out of it over time. The computer itself cannot offer insights into a text, it only processes bits of zeroes and ones. Add Natural Language Processing (NLP) to the equation and we see an attempt to make sense of text. Note that the first text-translating machines were patented in the 1930s, well before the invention of the modern transistor.
Phase 3: Image, audio and video
For the last few decades, we have watched our computers double in speed every couple of years. As a result, not only can we process text on our machines, we can also work with graphics and audio.
The result: detecting faces and objects in real time on our cameras, recognizing the song playing in the background, creating synthetic text-to-speech engines, and recording high-quality video right on your phone. There are more than 2 hours of video uploaded to YouTube every second. Netflix accounts for 15% of internet traffic. We are ready for video.
But audio and video are even harder for a computer to understand. At best, we make sense of the data by converting it to text. Still, adding these mediums to the equation can improve our understanding of the data by providing even more perspectives.
Phase 4: Environmental Data
When future generations turn to our history, they'll read the text we leave behind, look at our photos, listen to our voices, and watch us live our day-to-day lives. But another piece of data they will find useful is the environment we lived in.
Our world is three-dimensional, and there is much more information generated than the eye can see. Examples are weather data, GPS location, time of day, or even smells. These are data points that can help make more sense of the information.
Looking at a 2D plane can only explain so much. Making environmental data trivial to capture will jettison us into the next phase of computing.
Imagine there is a fire and firefighters can get more information about it before arriving on the scene. A device could report a smell of gas coming from the kitchen, or overcooked jambalaya, or burnt copper wires, each at a specific location. All this information makes it easier to fight the fire, or even prevent it.
Location through GPS has already revolutionized transportation. We can now travel to previously unknown places without much effort. Adding smart road signs that broadcast environmental data will also help autonomous cars navigate better.
Augmented reality works by looking at the environment not only through a flat video feed, but around the objects that make up the scene. Recognizing objects, identifying sounds, and capturing environmental data like weather, smells, location, and moisture in the air, all of this could improve the interpretation of information.
Predicting the future is bound to be flawed. We only ever get it right because people predict everything, and someone is bound to be right about something eventually. With that disclaimer out of the way, my prediction is that phase 5 is a context-driven interpretation of data. We have collected a lot of information; now we have to put it all together to make sense of it.
As much as AI and machine learning are used today, we haven't developed a way to ask the computer why. We know how to teach an algorithm to identify a cat in a picture, but we can't ask it why it thinks it's a cat.
As for context, the meaning of information can change drastically depending on it. Having a thousand loyal workers may describe a nobleman. Having a thousand unpaid workers is slavery. Having a thousand unpaid workers after a hurricane is volunteer work. Just adding a date as context drastically changes the meaning.
Context-driven programming is not new, but it is an interesting field that can help us make sense of data. Computers may still not understand us, but if we can use them to better understand our world, that is a goal worth pursuing.