
Open Source Cyber-Hacking and AI Autonomous Cars



The use of open source software by developers of AI self-driving systems could risk ugly security problems down the road. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

You’ve likely had to enter a series of numbers and letters when accessing a web site that wanted “proof” that you are a human being and not some kind of Internet bot. The typical approach involves your visual inspection of a grainy image containing letters and numbers, and then trying to figure out what those letters and numbers are.

It is intentionally made difficult: the letters and numbers are usually smashed together, and they are often twisted and distorted so as to be hard to discern.

These challenge-response tests are known as CAPTCHA, which is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart.”

The idea is that if a website wants to keep automated bots from accessing their site, there needs to be some means to differentiate between whether a human is trying to access the web site or whether it is some kind of automation.

Humans are quite good at visually discerning letters and numbers, and so the CAPTCHA helps distinguish whether the response came from a human or a bot. Automated systems have a difficult time ferreting out, amongst a twisted and distorted mix of letters and numbers, what those distinct letters and numbers are intended to be.

Some people don’t know why the CAPTCHA is being used and are pretty much just irritated by the whole thing.

Why do I have to look at this stupid and obviously messed-up list of random letters and numbers, ask those who are not in the know.

Makers of web sites are at times hesitant to use CAPTCHA because it could dissuade people from using the site and decrease the number of potential visitors. But those pesky automated bots might otherwise become “false” visitors, meaning the site might believe them to be actual humans, and there are adverse things a bot might do at a web site, so in the end it is often worthwhile to make use of CAPTCHA.

Now, you might be puzzled as to why the CAPTCHA cannot readily be hacked by automation.

One might assume that with the tremendous advances in Artificial Intelligence (AI) in recent times, certainly there must be a means to figure out those grainy images via automation.

Well, it depends partially on how strong the CAPTCHA is. If the CAPTCHA uses a varied combination of letters and numbers that are heavily skewed and mushed together, varies the height and width of the characters, and includes a sizable enough number of characters, the ability of an AI system to figure it out is quite limited today.

Of course, whatever makes it harder for the AI likely makes it harder for humans too. And if it gets too hard for humans, neither bots nor humans will be able to pass the test. That’s not very helpful, since it simply prevents anyone or anything from succeeding; you might as well close down your web site, since it won’t be accessible at all.

To properly recognize the CAPTCHA, you need to perform at least three key visual and mental tasks:

* Recognition

You need to visually examine the image and recognize that there are letters and numbers in it.

Any of the characters can be enlarged or shrunk, and can be set at various angles. They can be stretched or squeezed together. The parts of one character might be merged with the parts of another. The ways a CAPTCHA can obscure conventional letters and numbers are seemingly endless.

Humans seem able to handle these invariant recognition aspects relatively easily, meaning that we can very quickly grasp the essence of a letter or number shape in spite of the distortions made to it.

* Segmentation

If I show you a letter or number displayed on a standalone basis, such as the letter “h” or the letter “e,” you generally have a much easier time figuring it out.

On the other hand, if I merge them with other letters and numbers, such as pushing together the “he” and making each flow into the other directly, it typically becomes harder to discern. The true shape of the letter or number becomes masked by its being merged with other letters and numbers.

You need to be able to mentally disentangle the crammed-together numbers and letters into a series of distinctive chunks, and within each chunk try to reconstruct what the individual letter or number might be (a code sketch of this step appears after this list).

* Contextual

By having multiple letters and numbers, you can often improve your odds of guessing any individual letter or number by considering the context of the characters within the overall image. That being said, many CAPTCHAs don’t use regular words, since doing so would perhaps make it too easy to guess the individual characters. If the letters were “d” “o” “g” and you were able to guess the first two letters, it might be overly easy to guess the third letter. If instead the letters were “d” “g” “o,” you might not so readily guess the entire set, because it does not form a word that you would normally recognize.
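Of the three tasks, segmentation is the one that lends itself most readily to a first pass of automation. Below is a minimal sketch of that step using OpenCV, finding each blob of ink and cropping it as a candidate character; the input file “captcha.png” is a hypothetical placeholder, and heavily merged characters will defeat this simple connected-blob approach.

```python
import cv2

# Load the CAPTCHA image as grayscale (hypothetical file name).
image = cv2.imread("captcha.png", cv2.IMREAD_GRAYSCALE)

# Binarize so the ink becomes white blobs on a black background.
_, thresh = cv2.threshold(image, 0, 255,
                          cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Each external contour is one candidate character chunk.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

boxes = sorted(cv2.boundingRect(c) for c in contours)  # left to right
for x, y, w, h in boxes:
    chunk = thresh[y:y + h, x:x + w]  # crop one character for recognition
    print(f"candidate character at x={x}, size {w}x{h}")
```

Once each chunk is cropped, the recognition task can be attempted one character at a time, which is a far easier problem than recognizing the whole distorted string at once.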

There are numerous variations nowadays of CAPTCHA algorithms.

Some use just letters and numbers, while some also add into the mix a variety of special characters such as an ampersand and a percentage symbol.

You’ve likely also encountered CAPTCHA that ask you to pick images that have something in common. For example, you are presented with six images of a grassy outdoor field, and are asked to mark the images that have a horse shown in the image. These aren’t so easy because the horse will often be obscured or only a small portion of a horse appears in any given image.

The reason why the acronym of CAPTCHA mentions a Turing test is that there is a famous test in the field of AI that was proposed by the mathematician Alan Turing about how to determine whether an automated system could exhibit intelligence.

The test consists of having a human interviewer pose questions to both another human and a separate AI system, without knowing beforehand which is which. If the interviewer is unable to tell the difference between the two interviewees, we presumably can declare that the automation has exhibited intelligent behavior.

Some are critical of this test and don’t believe it to be sufficient per se, but it is nonetheless quite famous and regarded by many as a key test for ascertaining AI.

In the case of CAPTCHA, the Turing test approach is being used to see if humans can outwit a bot that might be trying to also pass the same test.

Whoever is able to figure out the letters and numbers is considered, or assumed, to be a human. Thus, if the bot can indeed figure out the CAPTCHA, it has momentarily won this kind of Turing test. I think we would all agree that even if some kind of automation can succeed at a CAPTCHA contest, we would be hard pressed to say it has exhibited human intelligence. In that sense, this is a small and extremely narrow version of a Turing test, and not really what we truly intend a Turing test to achieve.

In fact, because the human is having to essentially prove they are a human by passing a CAPTCHA, some refer to this test as a Reverse Turing test.

Here’s why.

In a conventional Turing test, the limelight is on the automation proving it has human-like capabilities. In this reverse Turing test, it is up to the human to prove that they are a human by performing better than the automation.

There is a popular CAPTCHA algorithm used as a plug-in for many WordPress developed websites that is known as “Really Simple CAPTCHA.”

In a recent article about it, a developer showed how easy it can be to build a simple AI system that succeeds at cracking its CAPTCHA challenges.

The CAPTCHA in this case consisted of a string of 4 characters rendered in a mixture of four fonts, and it avoided using the letters “o” and “i” to reduce confusion for the humans trying to figure out the generated images. Notice that these limitations make the problem much smaller to solve: compared to, say, a string of 10 characters rendered in 25 fonts, and with some letters eliminated, the solution space is a lot smaller than it otherwise would be.
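Some back-of-the-envelope arithmetic shows just how much smaller. The 32-character alphabet below is my assumption (lowercase letters minus “o” and “i,” plus digits), and the 10-character, 36-symbol case is a hypothetical stronger CAPTCHA for comparison:

```python
# Size of the answer space for the weakened CAPTCHA versus a stronger one.
weak = 32 ** 4      # 4 characters from ~32 symbols: 1,048,576 strings
strong = 36 ** 10   # 10 characters from 36 symbols: ~3.7 quadrillion
print(f"{weak:,} vs {strong:,}")
```

The extra fonts multiply the rendering variations a recognizer must cope with, but they do not enlarge the space of possible answers.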

The developer wanting to crack it used the popular Python programming language, along with OpenCV, a freely available set of programs for image processing, and Keras, a deep learning library written in Python. He also used TensorFlow, which is Google’s machine learning library (Keras uses TensorFlow). I mention the tools here to emphasize that the developer used off-the-shelf programming tools. He didn’t need to resort to some “dark web” secretive code to crack this CAPTCHA.

The CAPTCHA program was readily available as open source and therefore the developer could inspect the code at will.

He then used the CAPTCHA to generate numerous samples of CAPTCHA images, doing so to create a set of training data. The training data consisted of each generated image and its right answer. This could then allow a pattern-matching system such as an artificial neural network to compare each image to the right answer, and then try to statistically figure out a pattern for being able to go from the seemingly inscrutable image to the desired answer.
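A minimal sketch of that data-generation step might look like the following; render() here is a hypothetical stand-in for the open source generator’s actual drawing routine, and the alphabet reflects the restrictions noted above:

```python
import random
import string

# Letters and digits, minus the "o" and "i" the generator avoids.
ALPHABET = [c for c in string.ascii_lowercase + string.digits
            if c not in ("o", "i")]

def render(text: str) -> str:
    # Hypothetical stand-in for the generator's real distortion/rendering.
    return f"<distorted image of {text!r}>"

def generate_labeled_captcha():
    answer = "".join(random.choices(ALPHABET, k=4))
    return render(answer), answer  # (image, right answer) training pair

training_data = [generate_labeled_captcha() for _ in range(10_000)]
print(training_data[0])
```

The crucial point is that because the cracker runs the generator himself, every training image comes with its right answer for free.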

After doing some transformations on the images, the developer fed them into a neural network that he set up with two convolutional layers and two fully connected layers. According to his article, after just ten passes through the training data set, the neural network achieved full accuracy. He then tried it on new CAPTCHAs generated by the “Really Simple CAPTCHA” code, and his efforts paid off, as it was able to figure out the letters and numbers. This particular article caught my eye due to the claim that the project took just 15 minutes from start to finish.
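The article’s exact code is not reproduced here, but a minimal Keras sketch of that kind of network, assuming 20x20 grayscale images of individual (already segmented) characters and 32 possible output characters, might look like this:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Two convolutional layers, then two fully connected layers, classifying
# one segmented character image into one of 32 possible characters.
model = Sequential([
    Conv2D(20, (5, 5), padding="same", activation="relu",
           input_shape=(20, 20, 1)),
    MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    Conv2D(50, (5, 5), padding="same", activation="relu"),
    MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    Flatten(),
    Dense(500, activation="relu"),    # hidden fully connected layer
    Dense(32, activation="softmax"),  # one output per possible character
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=10)  # "ten passes" through the data
```

With the characters segmented out beforehand, even a small network like this converges quickly, which is consistent with the claimed ten passes through the training data.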

Now please keep in mind that this was a very simple kind of CAPTCHA.

I don’t want you to get a misleading impression that all CAPTCHA is as easy to crack as this.

I assure you that there are CAPTCHAs today that no AI or other software can crack with any assurance or consistency. CAPTCHA is still a relatively good means of distinguishing between a human and a bot. The CAPTCHA just has to be tough enough to weed out the commonly used cracking methods. By convention, CAPTCHA is normally made available as open source code.

Thus, some would say that it increases the chances of being able to crack it.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are using open source software to develop AI self-driving systems, and so are most of the self-driving car makers and tech firms, and this is both a boon and a danger.

As discussed regarding the CAPTCHA algorithm, it was available as open source, meaning the source code for it was publicly available. Anyone who wants to look at the source code can do so.

By looking at the source code, you can figure out how it works. By figuring out how it works, you have a leg up on finding ways to crack it.

If you don’t use open source code, and instead develop your own proprietary code, you can try to keep the source code secret and therefore it is much harder for someone else to figure out how it works.

If an attacker does not know how the code works, it becomes much harder to try and crack it. This does not mean it is impossible to crack it, but merely that it is likely going to be harder to crack it.

Some refer to the open source approach as a white box method and the proprietary code approach as a black box method. With a black box method, although you know what goes into and comes out of the box, you don’t know what is going on inside it. Meanwhile, with a white box method, you know what goes in and comes out, along with how it does its magic.

Today, open source code is prevalent, found in an estimated 95% of all computer servers and used in high profile systems such as those that run stock exchanges and the International Space Station. Some estimates say there are at least 30 billion lines of open source code available, and even that number might be understated.

Notably, open source is extensively used for AI software and many of the most popular AI packages today are available as open source.

Generally, there is an ongoing debate about whether open source is unsafe because nefarious hackers can readily inspect the code and find ways to hack it, or whether it is perhaps even safer than proprietary software because you can have so many eyes inspecting it.

Presumably, something that is open for anyone to inspect can be seen by hundreds, thousands, maybe millions of developers, and such a large number of reviewers will help ensure that the open source code is safe and sound to use.

One caveat about using open source is the classic use-it-and-forget-it trap that snares many developers who decide to use open source code in their own systems.

Developers will go ahead and wrap the open source into a system they are building, and pretty much move on to other things. Meanwhile, if a hole is spotted in the publicly posted open source, and a fix is applied to that hole, the developer that grabbed the open source at an earlier time might not be aware of the need to apply the fix in their instance. This can happen readily: the developer forgets they used that particular open source, or never becomes aware of the fix, or no longer has anything to do with the developed proprietary code and those maintaining it don’t know it includes the open source portions.

One of the most infamous cases of open source being exploited consists of the Heartbleed computer security hole that was discovered in the OpenSSL cryptographic source code.

In OpenSSL, there is a part of the code that sends a so-called heartbeat request from one system to another system. This is an important program that is used by most web sites to ensure a secure connection, such as for doing your online banking.

When making the request, the requesting system would normally send a message of one size, let’s say 10 characters, and expect to get back the same message, also 10 characters in size. It turns out that if the requesting system sent a message that claimed to be 300 characters but actually contained only 10, the responding system would be misled into sending back 300 characters, of which 290 might inadvertently contain something sensitive from that system. In programming parlance, this is often referred to as a buffer over-read problem.
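To convey the flaw, here is a toy Python simulation; the real bug lived in OpenSSL’s C code, and everything here, including the “secret” contents, is purely illustrative:

```python
# Toy simulation of a heartbeat buffer over-read (not OpenSSL's real code).
SERVER_MEMORY = bytearray(300)
secret = b"secret_key=hunter2;session=abc123"
SERVER_MEMORY[10:10 + len(secret)] = secret  # sensitive data sits nearby

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Buggy: trusts claimed_len instead of checking len(payload).
    SERVER_MEMORY[0:len(payload)] = payload
    return bytes(SERVER_MEMORY[0:claimed_len])

print(heartbeat(b"0123456789", 10))   # normal: the same 10 bytes come back
print(heartbeat(b"0123456789", 300))  # over-read: leaks the nearby secret
```

The one-line fix is simply to refuse any request whose claimed length exceeds the payload actually received.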

In 2014, this hole became headline news as soon as it was pointed out.

The significance of the hole was that zillions of interacting systems thought to be secure were potentially not so secure.

The clever name “Heartbleed” was given to this security hole, since it relates to the heartbeat portion of the systems, which was now essentially bleeding out secure info. The hole was quickly plugged, and the matter was logged into the global registry of the Common Vulnerabilities and Exposures (CVE) database for everyone to know about. Nonetheless, many did not apply the fix to their systems right away, even though they should have done so.
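In spirit, applying those fixes on time boils down to keeping an inventory of the open source you embedded and rechecking it against published advisories such as the CVE database. Here is a hedged sketch of that idea; both dictionaries are hypothetical illustrations (though the CVE identifier shown is Heartbleed’s real one), and the version comparison is naive:

```python
# Hypothetical inventory of embedded open source, pinned to versions.
pinned = {"openssl": "1.0.1f", "image-lib": "2.3.0"}

# Hypothetical advisory feed, e.g., refreshed from the public CVE registry.
advisories = {
    "openssl": {"fixed_in": "1.0.1g", "cve": "CVE-2014-0160"},
}

for package, version in pinned.items():
    advisory = advisories.get(package)
    if advisory and version < advisory["fixed_in"]:  # naive string compare
        print(f"{package} {version} has a known hole ({advisory['cve']}); "
              f"upgrade to {advisory['fixed_in']}")
```

The hard part in practice is not the comparison but keeping the inventory complete, which is exactly what the use-it-and-forget-it trap undermines.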

Currently, most of the automakers and tech firms are feverishly incorporating all sorts of open source into the AI of their self-driving car systems.

It makes sense to do so, since otherwise you would need to reinvent the wheel on all sorts of software aspects that are needed for a self-driving car.

The cost to develop that same open source from scratch would be enormous. And, it would take time, lots of time, in order to create that same code. That’s time that nobody has. Indeed, there is a madcap rush today to achieve a true self-driving car, and no one developing self-driving cars wants to be left behind due to writing code that they could otherwise easily and freely get.

We do need to ask some serious questions about this.

Does the use of open source in the AI and the other software of self-driving cars mean that we are laying ourselves bare for a substantial and really ugly security problem down the road, so to speak?

Some would say, yes.

Are there nefarious hackers that are right now inspecting the self-driving car open source code and looking for exploits?

Some would say, yes.

If they are looking for exploits, there’s not much reason right now for them to reveal those holes, and so they presumably would wait until the day comes that there are enough self-driving cars on the roads to make it worthwhile to use such an exploit. Plus, once self-driving cars do become popular, it is likely to attract hackers at that time to begin inspecting the open source code, hopeful of finding some adverse “golden nugget” of a hole.

This open source conundrum exists for all aspects of self-driving cars, including:

  • Sensors – open source software for sensor device control and use
  • Sensor Fusion – open source software for sensor fusion
  • Virtual World Model – open source software for virtual world modeling
  • Action Planning – open source software for creating AI action plans
  • Controls Activation – open source software to activate the car controls
  • Tactical AI – open source software for self-driving car tactical AI
  • Strategic AI – open source software for self-driving car strategic AI
  • Self-Aware AI – open source software for self-driving car self-aware AI

Depending upon how a particular car maker or tech firm is building their self-driving car, each element is likely to either have open source in it, or be based upon some open source.
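To make the chain concrete, here is a schematic sketch of how these elements feed one another in a single driving loop; every function is a hypothetical stand-in rather than any vendor’s actual code, and each stage is a typical place where open source gets embedded:

```python
# Schematic only: hypothetical stand-ins, not any vendor's actual code.
def read_sensors():                  # Sensors: device control and use
    return {"camera": "frame", "radar": "echoes", "lidar": "point cloud"}

def fuse(readings):                  # Sensor Fusion: merge into one view
    return {"obstacles": [], "lanes": []}

def update_world_model(fused):       # Virtual World Model
    return {"surroundings": fused}

def plan_actions(world):             # Action Planning, tactical/strategic AI
    return ["hold_lane", "maintain_speed"]

def activate_controls(actions):      # Controls Activation
    print("issuing commands:", actions)

# One pass of the driving loop:
activate_controls(plan_actions(update_world_model(fuse(read_sensors()))))
```

An exploit in the open source underlying any one of these stages could compromise the whole loop, which is why each stage deserves its own scrutiny.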

It is incumbent upon the self-driving car industry to realize the potential for exposures and risks due to the use of open source.

Self-driving car developers need to make sure they are closely inspecting their open source code and not just blindly making use of it.

They need to stay on top of any patches or fixes. We need more audits of the open source code being used in self-driving cars. And, overall, we need more eyeballs reviewing the open source code that underlies self-driving cars. As mentioned earlier, the hope is that the more “good” eyeballs involved, the more likely any holes or issues will be caught and fixed before the “bad” eyeballs find and exploit them.

If the bad eyeballs have their way, it will be not so much a CAPTCHA as a GOTCHA.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.


