What kinds of challenges and innovations will computer science experts face in the near future? Computer science is a world where abstract principles are realized in astounding practical forms. From an early history of room-sized mainframes, it has progressed to the common consumer desktop and the mobile revolution. What could be next?
1) Advanced Robotics
Today’s robots are limited in both scope and sophistication. Ideal for rote tasks, they have achieved only limited capacity for situational “reasoning,” much less high-order abstract functions. Much growth would be needed for robotic labor to drive large-scale economic changes as the assembly line did. Yet these limitations may be overcome in the near future.
The new wave of robotics incorporates more sophisticated sensors and streamlined forms, making it possible for machines to work beside humans while posing far fewer safety risks. While many roboticists are innovating at an approximately human scale, the advent of advanced drone technology has created many new uses for flying robots.
One of the top challenges in making advanced robotics a reality is operationalizing an “intellect” that will let robots adapt to changing tasks. Although major issues still exist – see below for that discussion – cloud technology could be the key to centralizing and accelerating parts of AI development. Instead of each unit being programmed from scratch, robots could use distributed computing to download instruction sets as needed.
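The download-on-demand idea can be sketched in a few lines of Python. This is purely illustrative: the shared skill repository is simulated here with an in-memory dictionary, and all names ("skill," the step lists, the cache) are assumptions for the sake of the example, not any real robotics API.

```python
# Simulated cloud repository of task "skills" (illustrative only; a real
# system would fetch these over the network from a shared service).
CLOUD_SKILLS = {
    "pick_and_place": ["locate_object", "grasp", "move_to_target", "release"],
    "inspect_weld": ["position_camera", "capture_image", "classify_defect"],
}

class Robot:
    def __init__(self):
        self.local_cache = {}  # skills this unit has already downloaded

    def get_skill(self, task):
        # Download an instruction set only when it is not cached locally,
        # so units share one central definition instead of being
        # programmed from scratch.
        if task not in self.local_cache:
            self.local_cache[task] = CLOUD_SKILLS[task]
        return self.local_cache[task]

robot = Robot()
steps = robot.get_skill("pick_and_place")
```

The point of the sketch is the division of labor: the repository is updated once, and every connected unit benefits the next time it requests that skill.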
2) Artificial Intelligence
One limiting factor in the development of robotics is the linear nature of computer hardware. Even the most advanced motherboards, processors, and microchips must compartmentalize their resources for maximum efficiency under real-life conditions. Electrons – and data – must move from Point A to Point B to Point C and back again. Bottlenecks are a consistent issue.
In living things, biological systems that process data are fully intertwined at countless levels. The human brain has roughly 100 billion neurons. Compare this to the 5.5 billion transistors in Intel’s Xeon Haswell-EP, a flagship 18-core CPU. Although nothing artificial approaches the complexity of a brain, the designs and sensor technologies used in today’s high-level robotics are inspired by nature.
As technology grows more able to mimic the functionality of the human brain, it may be possible to overcome some theoretical hurdles preventing truly adaptive AI. Researchers are closing in on the capability to develop transistors and even entire circuits built around DNA strands and chemical reactions – groundbreaking innovations sure to interface with traditional computer science in amazing ways.
3) Precision Genetic Engineering
Genetic engineering has enormous potential to help humanity overcome food shortages and health crises, especially in areas of the developing world that have traditionally faced agricultural challenges. In practice, however, the debate over the best way to apply genetic engineering to human needs has often overshadowed its uses. Precision genetic engineering may finally put many concerns to rest while unlocking more of its potential.
Advanced genetic engineering using RNA interference and other innovations makes it possible to approach desired outcomes with surgical precision, greatly outperforming current solutions based on the bacterium Agrobacterium tumefaciens. To deliver consistent, predictable results, however, new genetic engineering techniques will require multi-generational tracking and modeling.
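What might multi-generational tracking look like in software? A minimal sketch, assuming a toy record per plant line with a link to its parent (the class, field names, and sample lines are all hypothetical, not any real breeding database):

```python
from dataclasses import dataclass

# Illustrative data model: each engineered line records its generation,
# whether the target trait was expressed, and which line it came from.
@dataclass
class PlantLine:
    name: str
    generation: int
    trait_expressed: bool
    parent: "PlantLine | None" = None

def lineage(line):
    """Walk back through the generations of a given line, newest first."""
    chain = []
    while line is not None:
        chain.append((line.generation, line.name, line.trait_expressed))
        line = line.parent
    return chain

# Three generations of a hypothetical engineered crop line.
g0 = PlantLine("founder", 0, True)
g1 = PlantLine("cross-A", 1, True, parent=g0)
g2 = PlantLine("cross-A1", 2, False, parent=g1)
```

With lineages recorded this way, a modeler can ask where in the chain a trait stopped being expressed, which is the kind of question multi-generational tracking exists to answer.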
What can provide the ability to sort through massive amounts of information? Big data: the practice of evaluating an ocean of information characterized by volume, velocity, and variety far beyond what was known only a few years ago. With computer scientists at the helm, the combination of these technologies may make radical positive change possible within just a few generations.
4) Health Informatics
The traditional model of healthcare relies on self-reporting from patients. Unfortunately, this can leave many conditions undiagnosed until well after their progression is severe. With the help of computer science pioneers, health informatics is transforming the way healthcare is delivered, lending it a much more proactive mindset.
Cloud computing and wireless technology have combined to bring consumers wearable devices that provide important health information between checkups. It is easier than ever to spot the warning signs of hypertension, diabetes, and many other chronic conditions months or even years before the patient would notice symptoms.
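The warning-sign idea reduces to simple pattern detection over a stream of readings. A minimal sketch, assuming daily systolic blood pressure values from a wearable; the 130 mmHg threshold and 7-day window are illustrative assumptions, not clinical guidance:

```python
def flag_sustained_elevation(readings, threshold=130, window=7):
    """Return True if every reading in some run of `window` consecutive
    days exceeds the threshold -- a pattern worth raising with a
    clinician, as opposed to a single high reading."""
    for i in range(len(readings) - window + 1):
        if all(r > threshold for r in readings[i:i + window]):
            return True
    return False

# Eight days of hypothetical systolic readings (mmHg).
week = [128, 131, 134, 133, 135, 132, 136, 138]
```

A single spike does not trigger the flag; only a sustained run does, which is exactly the kind of slow-building signal a patient would otherwise miss between checkups.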
Consumer health informatics is the vanguard of this movement, but it is only part of the big picture. With increasing amounts of raw data, experts are better able to manage and monitor public health concerns. In the future, the data mining made possible here might converge with medical nanotechnology, providing responsive and tailored healthcare at all levels.
5) Nanotechnology
Nanotechnology focuses on engineering at the “nanoscale,” one to 100 nanometers. The tools needed to see individual atoms were not available until 1981, so in many ways, this science parallels the computing revolution that facilitated it. Researchers have discovered that materials at the nanoscale can have unexpected and novel properties, resulting in a wealth of breakthroughs.
Until recently, the majority of work in nanotechnology focused on the development of specialized materials, including ultra-light polymers and crystals with special conductive properties. These efforts have accelerated microchip development, making it possible to envision a near future where transistor count is orders of magnitude higher than it is today.
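Back-of-the-envelope arithmetic shows what “orders of magnitude” implies. Assuming, purely for illustration, the classic cadence of transistor counts doubling every two years:

```python
def years_to_multiply(factor, doubling_period_years=2):
    """How many years of doubling (one doubling per period) it takes
    for a count to grow by at least the given factor."""
    years = 0
    count = 1.0
    while count < factor:
        count *= 2
        years += doubling_period_years
    return years

# Two orders of magnitude (100x) at a two-year doubling cadence:
horizon = years_to_multiply(100)
```

At that pace, a hundredfold increase takes roughly a decade and a half; whether nanoscale materials sustain or beat that cadence is exactly what this research aims to decide.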
As nanotechnology matures, it will make possible a world of nanorobotics. Theoretical nanorobots could complete manufacturing tasks in a fraction of the time required now. Likewise, nanorobotics could combine with genetic engineering and health informatics technologies to treat diseases like cancer at the cellular level.