
In this day, when the internet seems to supply every human need, an MIT professor suggests disconnecting from it.  He simply no longer trusts the internet, and in a number of ways, neither do I.  The internet can act, in many ways, like a crutch.

If you don’t use it, you will lose it.

This adage applies to muscles, to tools (which rust without use), and to our minds.  For example:

“What is your mother, father, brother or other family member’s phone number?”

Person:  “Well, it’s in the phone…” they say, holding up their phone.

“Your phone is gone: destroyed, missing, or unable to reach a tower.  But you have access to another phone.  How do you call them?”

Inevitably, about 90% of people blink at me, while 10% or fewer know that critical phone number.  A few blink with the realization that not knowing the number is a problem; whether they have since done anything about it is another question.

Answer:  Dial the number from memory.  Exercise your mind.  Don’t allow technology to sap your brain; work it the way you work your muscles.


Dr. Herbert Lin, one of the nation’s pre-eminent thinkers on cybersecurity policy, shuns the internet-connected devices that fill some American homes.

He’ll have nothing to do with “smart” refrigerators, hands-free home speakers he can call by name, intelligent thermostats and the like.

“People say to me, ‘How can you have a doctorate in physics from MIT and not trust in technology?’ And I look at them and say, ‘How can I have a doctorate in physics from MIT and trust technology?’ ” Lin said.

As far as I can tell, Dr. Lin understands two lessons of history.  First, if a human being can make a device, another human being can ‘unmake’ or hack that device.  Second, human beings have consistently built technology without understanding the Law of Unintended Consequences.  Why is Professor Lin worried?

Part of what he distrusts is the “internet of things,” and the ease with which hackers can penetrate “smart” devices with digital worms and shanghai them into massive robotic networks to launch crippling digital attacks or generate ever greater quantities of spam.

It is a mistrust based on mathematics. Internet-enabled devices are exploding in number. Gartner, a research giant in technology, says the devices will climb from 6.4 billion at the end of last year to 25 billion by 2020. Such growth sharply augments the power of hidden robotic networks, or botnets.
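To put that projection in perspective, here is a rough back-of-envelope sketch in Python.  It is not Gartner’s own calculation; the roughly four-year window between the two figures is my assumption.

    # Rough illustration: the annual growth rate implied by Gartner's figures,
    # assuming about four years between 6.4 billion and 25 billion devices.
    start_devices = 6.4e9
    end_devices = 25e9
    years = 4

    annual_growth = (end_devices / start_devices) ** (1 / years) - 1
    print(f"Implied annual growth: {annual_growth:.0%}")   # roughly 41% per year

At that pace, the pool of potential botnet recruits grows by nearly half every year.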

Internet hacks and botnets are pervasive and malicious; they damage infrastructure and lives, and they cost billions in economic damage as well.

A botnet already made headlines once. Last Oct. 21, a botnet slowed internet activity to a crawl along the Atlantic Seaboard. A hacker using a malicious worm dubbed Mirai – Japanese for “the future” – took over thousands of internet-connected security cameras and other seemingly innocuous devices and ordered them to fire relentless digital “pings” at a New Hampshire company, Dyn, that oversees part of the backbone of the internet. Dyn was overwhelmed, and popular sites such as Twitter and The New York Times were temporarily inaccessible.
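Why does a flood of small, innocuous devices overwhelm a company like Dyn?  The arithmetic is simple.  The numbers below are hypothetical, chosen only to illustrate the scale, not the actual Mirai traffic figures.

    # Hypothetical illustration of botnet scale: modest per-device traffic,
    # multiplied across thousands of hijacked devices, dwarfs the capacity
    # a service is provisioned to handle.
    hijacked_devices = 100_000            # hypothetical botnet size
    requests_per_device_per_sec = 50      # hypothetical per-device traffic
    server_capacity_per_sec = 1_000_000   # hypothetical provisioned capacity

    attack_rate = hijacked_devices * requests_per_device_per_sec
    print(attack_rate / server_capacity_per_sec)   # 5.0 -- five times capacity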

The near-future question is AI, artificial intelligence, as humans hand over more and more decisions to machines rather than to their own imperfect intelligence.  Yes, humans make mistakes, plenty of them, yet we have to ask ourselves: what is the human journey but mistakes made, fixed and appreciated?  The hard work of hard lessons learned; the virtues of work, diligence and focus brought about by adversity and hard decisions.  A bank of AI computers and robots could thread through its algorithms in a fraction of a second to come up with the ‘best’ decision.

But how does AI consider human emotions, individualized human emotions?  Does the AI take into consideration all the possible factors?  Or are the factors under consideration only the ones in the AI’s database, the only ones its programmers thought to include?  Accounting for every single, individualized human factor would be nearly impossible.  A majority, possibly.  But not all.
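To make the point concrete, here is a minimal, hypothetical sketch in Python.  The factor names and weights are invented purely for illustration; the point is that only the pre-programmed factors can ever influence the outcome.

    # Hypothetical sketch: a "decision" function that can only weigh the factors
    # its programmers chose to include. Anything outside FACTORS -- grief,
    # loyalty, a gut feeling -- is silently ignored.
    FACTORS = {"income": 0.5, "credit_score": 0.3, "employment_years": 0.2}

    def score_applicant(applicant: dict) -> float:
        # Only the pre-programmed factors contribute to the score;
        # every other detail about the person is dropped.
        return sum(weight * applicant.get(name, 0.0)
                   for name, weight in FACTORS.items())

    applicant = {"income": 0.8, "credit_score": 0.9,
                 "recently_widowed": True}   # this factor never enters the decision
    print(score_applicant(applicant))        # 0.67 -- computed from two factors only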

In addition, the AI decision tree would become a collective of the most common questions.  Would humans then shape their decision-making to fit the AI framework, diminishing their own humanity and individualism?

If the goal is to make AI more human in its form and handling, is the equal and opposite reaction that humans become more machine-like?