
Bounded Volumes and Virtual World Models Of AI Autonomous Cars



The AI self-driving car needs to see each vehicle, as well as each pedestrian, as a Bounded Volume or virtual container, so its virtual world modeling system can avoid collisions. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

The car ahead of me on the freeway had several lengthy construction poles tied to the roof of its mid-sized body, and the flopping poles dangled far beyond the rear end of the zipping-along car.

In California, the legal requirement is that any overhanging or projecting items extending more than four feet beyond the tail lamps must be adorned with a red cloth or a fluorescent orange flag.

And, if the driving will happen at nighttime, there must also be two red lights attached to the overhanging projection (per our California Vehicle Code, CVC Section 35410).

None of those legal requirements were being met by this driver.

For the moment, it was daytime. In about an hour it would be nighttime. At a minimum, the red or orange flag should have been used, and it was a likely bet that the driver was going to be on the road past sunset too.

Tsk, tsk, flouting the law.

In any case, the immediate concern was whether other drivers would realize that the poles were protruding.

Given the time of day and the dwindling sunshine, and given that many here in Southern California drive maddeningly and don’t pay close attention to the road and roadway obstacles, I had little doubt that some nutty driver might accidentally get so close to the rear of the pole-toting car that those poles would ram into a windshield.

Based on the nature of the poles and how they were tied to the roof of the car, I gauged that the poles would be unlikely to poke a hole in another car’s windshield and probably wouldn’t create much damage beyond a few scratches and dents. The real problem would be the reaction of the driver who rammed into the poles.

Would the driver that hit the poles suddenly be shocked and surprised, and in that mental capacity opt to make a wild maneuver?

Doing so at the pace of 65 miles per hour was bound to create havoc.

All it takes is for one driver to do something untoward and it causes a cascading impact to other nearby drivers. Another consideration was whether those poles might be jostled by hitting a windshield and then spill onto the freeway, which could also create a cascading array of cars weaving and dodging those poles. More chances of cars hitting each other.

Did the driver of the car that had placed the poles onto their roof realize the problems inherent in their actions?

Were they lazy in not putting on a red or orange flag, or did they assume it wasn’t needed, or maybe they thought that it was too much trouble and assumed it was “obvious” that the poles were overhanging the car?

Or, perhaps the driver figured that no other car would venture so close to their car that the protrusion would make a difference. If we all kept the requisite distance from other cars based on the speed charts, such as one car length for every ten miles per hour (no one in Los Angeles seems to abide by this!), presumably no one would encounter those precariously dangling poles.

I kept my distance from the car. Doing so, though, merely led other cars to jump in between me and the car ahead. Our freeways are a dog-eat-dog world in which any available gap is immediately considered fair game for intrusion. Indeed, some drivers here insist they are doing the rest of us a great service by compacting traffic. In their minds, any unused gap between cars is unsightly wasted space that merely elongates the traffic woes on the freeway.

As other cars came into my lane, some appeared to notice the poles, while others did not. I saw one car start into my lane, realize the poles were poking out, and retreat back into its own lane. Another car, a smaller sports car riding relatively close to the ground, smoothly went under the poles, using my lane to get over into the next lane to my right. A typical move by a sports car driver. Why switch lanes one at a time when you can make a swift multi-lane change, the NASCAR way?

Eventually, the driver of the pole-laden car decided they were nearing their exit and began to move toward the exit lane. With each lane switch, they seemed to misjudge the clearance they needed to make the move safely. I doubt the driver was accounting for the additional length of the overhanging poles.

Not only did this driver fail to flag the poles, they seemed not to care about how the poles changed the dynamics of the driving task.

Sad, but not particularly surprising.

Bounded Volumes And Cars

Most of us likely don’t think too much about the overall size of our cars, at least not consciously per se.

Once you get used to the size or dimensions of your car, you pretty much know it by heart. If you drive the same car over and over, each day, you can almost feel the outer edges of the car. You instinctively know if you can make the tight corner or squeeze your car into that parking space that you had your eye on.

In contrast, when you rent a car, assuming it is a different make and model from your usual car, it will probably take you a few minutes of driving to get used to the dimensions of the car.

I remember renting a large-sized SUV for a camping trip and it was hard at first to sense where the four corners of the car were. I parked in front of my house and suddenly realized that the SUV stuck out beyond the usual spot that I park my normal car. When I went over to the mall to get some groceries, a parking spot opened up near the store, but as I slowly maneuvered the car into the spot, I realized that trying to park the beast in a compact-sized space was not a good idea.

A quick question for you. Take a moment to visualize in your mind the car that you usually drive.

What is the length of the car?

What is the width of the car?

Most people aren’t readily able to state in feet or inches what the length and width are. They just “know” about how big their car is. They can feel it when they drive. The moment you try to parallel park your car, you often become acutely aware of the size, since you are trying to shove it into a spot that oftentimes just barely allows it to fit.

What about the height of your car?

Most people aren’t so sure what the precise height of their car is. They can guess. They also generally are able to discern whether their car will be able to go underneath a roadway sign or a bridge. This can be tested when you go through a drive-thru eatery that has a posted sign warning what the height limit is. I’ve seen drivers inch forward, unsure of whether their roof might hit something, especially if they had a ski rack or a surfboard rack on their rooftop.

In terms of estimating the size of something, I am reminded of a notable moment from when my children were quite young.

We had gone to a ceramics shop that allowed us to buy various ceramic objects and then paint them right there in the store. It was a fun activity. After painting the objects, you left them at the shop for a day or so, allowing the shop to cure them, and then you’d come back to pick up your now shiny painted ceramic objects.

Upon arriving at the store a few days after painting the objects, consisting of a ceramic bunny and ceramic egg (done for Easter!), we looked at them with tremendous glee and pride. They looked superb, like a professional had painted them. I asked the kids to go get two cardboard boxes from the other side of the store so we could put each of the ceramic objects into a box and safely transport them.

One of the kids came back with a box that was so small that neither the bunny nor the egg could fit. The other came back with a box that was quite large and would likely allow the bunny or egg to roll around and possibly get damaged while carrying the box. I pointed out these discrepancies and politely requested that they try to find a box that might be a more appropriate fit. You might say a Goldilocks-sized box, one that was not too small, nor too large.

They happily did so.

We then packed the bunny and the egg into their own respective boxes. A successful effort, and one whose results are showcased on my mantle to this day; we notably bring those objects out into the open when Easter comes around, symbols of the love and joy that went into crafting them.

What do a ceramic Easter egg and bunny have to do with a car on the freeway with overhanging poles?

Good question, and here’s the answer.

Judging the dimensions of a box needed to fully contain an Easter egg or bunny is akin to envisioning a kind of virtual box that might surround the dimensions of your car.

I’d like you to once again think about the car that you usually drive.

This time, rather than trying to state the dimensions, instead try to imagine a box into which your car might fit. When I say the word box, I suppose you can think of it more like a container, like say a shipping container. If we were going to try and ship your car to someplace, what sized container (or box) would you need?

You could make a wild guess and select a container or box that is twice the size of your car. In that case, you’d be sure that the car would fit. But, suppose I told you that there was an added cost as the size of the container or box increases. Thus, aiming high is going to be more costly.

I’m sure you would then adjust your guess and aim at a smaller container or box.

Suppose the container or box is too small? That won’t do, since you need to make sure that every inch of your car fits into the container or box. You can’t have anything stretching beyond the container or box. Ideally, you want the container or box to just fit, ensuring that all aspects of the car are contained within the box, and not being much larger since the added cost of the larger size is something you are trying to avoid.

From your school days, I’m sure you know that this box or container is going to have three dimensions. If all three dimensions are the same, you have a cube, and the volume of the cube is the side length multiplied by itself three times, often written as V = a^3. If the sides are unequal, the volume of the resultant cuboid is typically written as V = l x h x w: the length, times the height, times the width. Some prefer to say that the width is the depth, which is fine if that’s what you feel more comfortable with.

Most cars are pretty much in the shape of a cuboid, meaning that the length, height, and width differ.

We don’t necessarily need, though, to make the container or box in the shape of a cuboid. Maybe we could fit your car into a pyramid shape, the volume being 1/3 B x h (with B the area of the base). Or maybe into the shape of a sphere, the volume being 4/3 pi r^3. And so on.
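
To make those volume formulas concrete, here is a minimal sketch in Python; the car dimensions are invented, illustrative values, not measurements of any particular vehicle:

    import math

    # Hypothetical dimensions, in feet, for a mid-sized car.
    length, width, height = 15.0, 6.0, 4.8

    # Cuboid: V = l x h x w
    cuboid = length * height * width

    # Cube big enough on every side: V = a^3
    a = max(length, width, height)
    cube = a ** 3

    # Pyramid volume formula: V = 1/3 B x h, with B the base area.
    pyramid = (1.0 / 3.0) * (length * width) * height

    # Sphere: V = 4/3 pi r^3, with r at least half the longest dimension.
    r = length / 2.0
    sphere = (4.0 / 3.0) * math.pi * r ** 3

    print(f"cuboid {cuboid:.0f}, cube {cube:.0f}, "
          f"pyramid {pyramid:.0f}, sphere {sphere:.0f} cubic feet")

At these made-up dimensions, the snug cuboid (432 cubic feet) is dwarfed by the enclosing cube (3,375) or sphere (about 1,767), which is exactly the cost argument for a tighter fit.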

The easiest way to imagine the container would be to assume it is going to be a cuboid shape. Admittedly, this might not be a very efficient container depending upon the shape of your car. If we used some other shape, either a regular one like a sphere or pyramid, maybe it would be a better fit. Or, if we could create the entire design of the container on our own, we might shape it in an irregular fashion, curving it here and there to make it fit in a skintight way with your car.

When trying to place your car into a container of some kind, let’s refer to this as the Bounded Volume (BV). I want you to help me put your car into a container, which will consist of some amount of volume and will be bounded by the shape we use. The BV could be a regular shape such as a relatively simple cube or cuboid. Or you might conceive of a less usual shape, such as a cone or pyramid, or you might fashion a rather irregular shape that conforms more tightly to your car.

AI Autonomous Cars And Bounded Volumes

What does this have to do with AI self-driving autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars.

One of the intriguing aspects of AI self-driving cars, and an aspect not often discussed, involves the need for the AI to detect objects and essentially craft a Bounded Volume (a virtual container) around those detected objects. This effort is key to the rest of the AI driving system and especially to the virtual world modeling aspects.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and the steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a simplified sketch of this pipeline, in code, follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
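
As a rough illustration of how those five steps chain together, here is a minimal sketch of one processing cycle; every class and function is a hypothetical placeholder standing in for a full subsystem, not any automaker’s actual API:

    class WorldModel:
        def __init__(self):
            self.objects = []          # Bounded Volumes of detected objects

        def update(self, fused):
            self.objects = fused       # replace with the latest fused view

    class Controls:
        def issue(self, plan):
            print("issuing command:", plan)

    def fuse(raw_readings):
        # Reconcile the camera, radar, LIDAR, etc. readings into one object list.
        return [obj for reading in raw_readings.values() for obj in reading]

    def choose_action(world_model):
        # Trivial stand-in for the AI action planner's what-if analysis.
        return "brake" if world_model.objects else "cruise"

    def driving_cycle(sensors, world_model, controls):
        raw = {name: read() for name, read in sensors.items()}  # 1. collection
        fused = fuse(raw)                                       # 2. sensor fusion
        world_model.update(fused)                               # 3. model updating
        plan = choose_action(world_model)                       # 4. action planning
        controls.issue(plan)                                    # 5. command issuance

    # One iteration, with a fake camera that reports a pedestrian ahead.
    driving_cycle({"camera": lambda: ["pedestrian"]}, WorldModel(), Controls())

In a real system this cycle repeats many times per second, which is why the timing concerns discussed throughout this article loom so large.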

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. Some pundits of AI self-driving cars continually refer to a utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human-driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human-driven cars on the roads. This is a crucial point since it means the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also with human-driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human-driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of the Bounded Volume, let’s consider why this is such an important matter to the AI self-driving car.

The AI system of the self-driving car is going to have various sensory devices, including cameras, radar, ultrasonic, LIDAR, and so on. We’ll focus on the cameras for the moment.

While the AI self-driving car is driving around, the cameras are capturing visual images of what surrounds the self-driving car. These images stream in continuously. On-board the self-driving car, computer processors are tasked with analyzing those images. The image processing looks to see whether there are any street signs, whether there are any nearby pedestrians, whether there are cars nearby, etc. This has to happen in real-time. Time is crucial.

The image processing doesn’t have the luxury of acting in a lackadaisical manner. When a car is moving along on the freeway at 65 miles per hour, and other cars are whizzing past at the same or faster speeds, there is not much time to spare when ascertaining the traffic situation. It is incumbent upon the image processing to as quickly as possible parse the images and identify what is happening.

A human driver takes for granted that they can see their surroundings. They usually don’t put much thought into this aspect since it seems obvious and expected. Of course I can see that pedestrian across the street. Of course I can see that roadway speed limit sign. But add heavy fog into the situation and the human driver will be reminded of how difficult it can sometimes be to see your surroundings. There is a lot more going on in your head than you might otherwise assume.

For my article about cognition timing aspects of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

For the dangers of myopic use of sensors, see my article: https://www.aitrends.com/selfdrivingcars/cyclops-approach-ai-self-driving-cars-myopic/

For my article about what happens if a sensor fails, see: https://www.aitrends.com/selfdrivingcars/going-blind-sensors-fail-self-driving-cars/

For safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Image Processing Aspects

How does the image processing try to do the same thing that humans seem to do with ease (most of the time)?

Upon inspecting the streaming images, the image processors attempt to dissect the images and figure out what identifiable objects exist in the scene. Is there a pedestrian over there? Is that a bike rider? Is that a car up ahead? Is that a car to the right?

By-and-large, most of what you might see on the roadway is relatively predictable, meaning that you can expect to see human pedestrians, you can expect to possibly see animals such as dogs or deer, you can expect to see other cars, and so on. As a human, you’ve learned or somehow come to know that a pedestrian is a human that is walking, standing, sitting, or otherwise a human-like figure that likely has arms, legs, a head, a body, feet, hands, and other such elements.

While driving your car, you look across the street and see a figure that consists of a head, a body, arms, feet, and the rest. In your mind, you somehow click to the notion that it is a human. Oh, but wait, it turns out that it is a statue of a famous president, placed at the corner of that upcoming street. It looked at first glance like a human, a living breathing human, but it turns out to be a statue of a human. Obviously, that’s quite different.

Why is it different? The odds are that you aren’t expecting the statue to suddenly move along and try to cross the street. If it was a human standing there, you’d be watching to see what the human is doing. Are they looking as though they want to cross the street? Is the human merely standing or starting to walk or run? How far from the street is the human? If the human runs versus walks, how soon might they appear in the street?

Because of the somewhat predictability of objects that we might see while driving, it is possible to use Machine Learning or Deep Learning to try and prepare the image processing to be able to detect objects that are being seen by the camera. We can feed hundreds, maybe thousands upon thousands of images of pedestrians into a deep or large-scale multi-layer artificial neural network, and try to get it to pattern match on those images. You want the pattern matching to be general enough to detect a wide set of variations, rather than becoming fixed on particular shapes or sizes.

Once the image processing has been trained, we’d put it into the on-board AI system of the self-driving car, presumably having tuned the image processing so that it can work quickly. Recall that I earlier emphasized the importance of time and the speed of processing. Having a great image processing system based on Deep Learning won’t do much good if it takes, say, 5 seconds to identify a potentially life-threatening object, during which the self-driving car has proceeded ahead unabated and rams into the object.
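
As a hedged illustration, the snippet below runs a stock pretrained detector from torchvision (standing in for whatever production network an automaker would actually deploy) and times a single frame against an assumed 100-millisecond budget; both the model choice and the budget are illustrative assumptions, not anyone’s actual deployment:

    import time
    import torch
    import torchvision

    # Off-the-shelf detector as a stand-in for a tuned production network.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    frame = torch.rand(3, 480, 640)   # placeholder camera frame, values in [0, 1]

    start = time.perf_counter()
    with torch.no_grad():
        detections = model([frame])[0]
    elapsed = time.perf_counter() - start

    # Each detection provides a class label, a confidence score, and a 2-D box,
    # the planar seed from which a Bounded Volume can be grown.
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score > 0.5:
            print(int(label), [round(v, 1) for v in box.tolist()], float(score))

    # Assumed 100 ms per-frame budget; real systems tune hardware and models
    # until inference fits comfortably inside the available reaction time.
    if elapsed > 0.100:
        print(f"too slow for the driving loop: {elapsed * 1000:.0f} ms per frame")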

For how the Uber incident in Phoenix provides an example of this timing aspect, see: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/

For more about the Uber incident, see my article: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

For my article about plasticity needed in Deep Learning, see: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

The importance of benchmarking in Machine Learning is vital, see my article: https://www.aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

The images streaming in from the cameras of the self-driving car are likely to contain many objects. Think about the times you’ve driven in a downtown area at rush hour, such as in New York City or downtown Los Angeles. There might be many dozens of pedestrians. There might be animals, such as dogs being walked by their owners. There are cars to your left, cars to your right, cars behind you, cars ahead of you. A delivery person might be pushing a cart that contains delivery packages.

It is chaos!

Not only do you need to discern those objects, you also need to identify where the buildings are, where the curbs of the street are, and so on. There might be trees along the side of the street. There could be fire hydrants. A slew of objects is in that scene. If you’ve ever ridden with a novice teenage driver as they drive in a busy downtown area for the first time, you can nearly see their eyes pop out of their head and their head explode as they try to notice and keep track of the myriad of objects.

The AI system on-board the self-driving car has to do the same.

Object Classifications

One means to try and cope with the complexity of the scene involves classifying objects.

The image processing tries to determine that the object standing on the street corner is a human, a pedestrian, and classifies the object as such. This gets posted into an overall model of what the surroundings consist of. The model, a virtual model of the real world, provides a kind of mapping of what objects there are, along with what they are and what they might do.

Based on the virtual world model, the AI action planning portion of the on-board system will assess the situation and try to determine what actions to have the self-driving car undertake. If the virtual world model indicates that a car is directly ahead of the self-driving car, and the brake lights are on, and the car ahead is braking and the AI self-driving car is coming rapidly upon the stopping car, the AI action planner has to ascertain what to do.

The AI action planner examines the virtual world model and tries to determine what action makes the best sense to initiate. Maybe the self-driving car can swerve around the car that’s stopping. This requires examining the virtual world model to see what is to the right and left of the car ahead. Are there pedestrians standing there? Is there a car in the way? These and a variety of what-if scenarios need to be rapidly explored.

Time is again crucial. If the AI action planner looks at twenty different what-if scenarios, it could use up so much time that the opportunity to swerve is gone anyway. Keep in mind that the AI action planner needs to emit commands to the self-driving car controls, and those controls need to receive the commands and then undertake the indicated physical actions, such as applying the accelerator or turning the steering wheel. That takes time. Suppose it takes 3 seconds to get the swerve underway, but 2 seconds have meanwhile been consumed deciding upon the swerving action; it might then be too late for the swerve to remain viable.
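
One common answer is to give the planner a hard time budget and have it act on the best option found so far when the clock runs out. Here is a minimal sketch of that idea; the candidate maneuvers, their scores, and the 50-millisecond budget are all invented for illustration:

    import time

    def score_scenario(name):
        time.sleep(0.002)   # pretend each what-if takes ~2 ms to simulate
        return {"brake": 0.7, "swerve_left": 0.4, "swerve_right": 0.8}.get(name, 0.0)

    def plan_with_budget(scenarios, budget_s=0.050):
        deadline = time.perf_counter() + budget_s
        best_name, best_score = "brake", float("-inf")   # safe default action
        for name in scenarios:
            if time.perf_counter() >= deadline:
                break            # out of time: act on what is known so far
            s = score_scenario(name)
            if s > best_score:
                best_name, best_score = name, s
        return best_name

    print("chosen action:", plan_with_budget(["brake", "swerve_left", "swerve_right"]))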

Imagine that a bike rider is also involved in the scenario of having to decide what to do about a car ahead that is unexpectedly coming to a stop. Maybe one option for the AI is to swerve the self-driving car into the bike lane and squeeze between the stopping car and a bike rider in the bike lane. Is there enough room to fit into that tight space?

This brings us back to the earlier discussion about Bounded Volumes.

The image processing is usually established to not only find objects in the scene, but also classify those objects and assign a Bounded Volume or virtual container to each object. The car ahead that is stopping will have been assigned a Bounded Volume, depending upon the dimensions and size of the car. Likewise, the bike and the bike rider, which we’ll say are one object consisting of two things, will be assigned a Bounded Volume or virtual container that encompasses both the bike rider and the bike.

Why assign these make-believe containers to the objects?

Well, as mentioned, suppose the AI is trying to decide whether to slip between the stopping car ahead and the bike rider in the bike lane. The self-driving car has its own Bounded Volume, which it should already be familiar with, and it needs to calculate whether the Bounded Volume of the self-driving car can fit between the Bounded Volume of the car ahead and the Bounded Volume of the bike rider.

To visualize this, consider cubes or cuboids for each of these objects. We have a cuboid representing the self-driving car. We have a cuboid representing the car ahead. We have a cuboid representing the bike rider. Via the camera, let’s assume we can gauge the distance that’s between the right side of the car ahead and the left edge of the bike rider. The width of the self-driving car has to be able to fit into that distance, if there’s any chance of sliding to the right of the stopping car.

In the virtual world model, we’d have represented the Bounded Volumes of the car ahead, the bike rider, and the self-driving car. Based on the virtual world model aspects, the AI tries to ascertain whether the self-driving car can fit into the gap between the car ahead Bounded Volume and the Bounded Volume of the bike rider, as based on the Bounded Volume of the self-driving car.
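
Reduced to the width dimension, that fit test is straightforward. A minimal sketch follows; the edge positions, widths, and safety margin are invented values:

    # Lateral positions, in feet, across the roadway (invented values).
    car_ahead_right_edge = 10.0   # right face of the stopping car's BV
    bike_left_edge = 18.5         # left face of the bike rider's BV

    sdc_width = 6.2               # width of the self-driving car's BV
    safety_margin = 1.0           # assumed cushion required on each side

    gap = bike_left_edge - car_ahead_right_edge
    needed = sdc_width + 2 * safety_margin
    print(f"gap {gap:.1f} ft, needed {needed:.1f} ft, fits: {needed <= gap}")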

Though you might think this is an easy geometry problem involving easy mathematics, I’d like to point out that there’s a lot more to this decision making. The car ahead is a Bounded Volume moving at a particular rate of speed and in a particular direction. The same is true of the self-driving car. The same is true of the bike rider. They are all in motion. This means that making predictions will involve uncertainty about what will happen next.

For uncertainties and probabilistic reasoning, see my article: https://www.aitrends.com/selfdrivingcars/probabilistic-reasoning-ai-self-driving-cars/

Suppose the bike rider opts to suddenly swerve to their left, shortening the gap between them and the car ahead. Suppose the car ahead opts to swerve toward the bike rider, shortening the gap. The world is not stationary. The virtual world model has to account for the motion of objects. This motion is not guaranteed to continue in any straightforward manner. The objects can alter their path at any time, even if it doesn’t seem to make logical sense for them to do so.

In this case, it might not seem logical that the bike rider would want to swerve toward the car that’s ahead. Dumb move! We don’t know that this really is a dumb move, since there might be something else happening related to the bike rider. Suppose a pedestrian holding a dog on a leash has accidentally let the dog go, and the dog is rushing onto the street. The bike rider, seeing the rushing dog, decides to try and swerve to the left of the dog, not perhaps realizing that the car adjacent to them is stopping and might opt to swerve into the bike lane.

In the virtual world model, did the AI system have the pedestrian modeled and the dog modeled? Maybe the image processing could not detect those objects, perhaps the pedestrian and dog were obscured by some other objects like a light pole. In that case, the AI action planner is “blind” to the notion that a pedestrian is standing there with a dog and that the dog had gotten loose.

Bounded Volumes And The Virtual World Model

Notice that this brings up numerous facets about the Bounded Volumes and the virtual world model.

If real-world objects are not represented in the virtual model because they were not detected by the image processing, it means the AI system is coping with a virtual world that is not an accurate depiction of the real world. The AI action planner can end up making life-or-death driving decisions based on this omitted information, perhaps leading to a calamity.

Suppose the image processing wasn’t sure what the clump near the corner was. It actually consisted of the human walking the dog, partially obscured by a light pole, and the image processing could only discern that something was there, a kind of unidentifiable blob. The image processing might not be able to classify it, but at least it could tag in the virtual world model that there’s an object there, along with imputing a Bounded Volume that encapsulates the blob.

Once the dog gets loose, this would hopefully be detected by the cameras, and the image processing might now discern that there is a pedestrian there and a dog there, two separate objects, which had been the single blob earlier posted at that location. The virtual world model would then be updated accordingly.

Why does it matter that the dog is now a Bounded Volume of its own? A dog as a classification means that you can predict various aspects of its behavior. A dog can move in certain ways at certain speeds. A human can move in certain ways at certain speeds. The classification of an object helps to anticipate what the object might do. It aids the AI action planner in deciding upon the action to be taken.
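
A sketch of how a classification might translate into a prediction envelope appears below; the per-class speed caps are rough, assumed figures, not calibrated values:

    # Assumed upper-bound speeds, in miles per hour, per object classification.
    MAX_SPEED_MPH = {
        "pedestrian": 10,     # a sprinting human
        "dog": 25,            # a dog at a full run
        "bicycle": 30,
        "car": 100,
        "skateboard": 15,     # an unridden board rolling loose
        "unknown_blob": 25,   # conservative guess until classified
    }

    def reachable_radius_ft(classification, horizon_s):
        """Worst-case distance, in feet, the object could cover in the horizon."""
        mph = MAX_SPEED_MPH.get(classification, MAX_SPEED_MPH["unknown_blob"])
        return mph * 5280 / 3600 * horizon_s

    # How far might each object plausibly move in the next 2 seconds?
    for cls in ("pedestrian", "dog", "unknown_blob"):
        print(cls, round(reachable_radius_ft(cls, 2.0), 1), "ft")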

Let’s pretend for a moment that the dog is instead a skateboard that the pedestrian has let loose and is rolling into the street. I say this because of what I am about to suggest. If the AI action planner has to calculate what might need to be hit in order to save the life of the bike rider, I think we’d all agree that hitting the skateboard would be preferable. This highlights the kind of ethical choices that the AI system needs to make, doing so in real-time.

For my article about the international aspects of ethics, see: https://www.aitrends.com/selfdrivingcars/global-moral-ethics-variations-and-ai-the-case-of-ai-self-driving-cars/

For the potential of ethics review boards, see my article: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For the possibility of pedestrian roadkill, see: https://www.aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For the need of defensive AI driving, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

Getting It Right Or Problems If Wrong

The Bounded Volume that you craft to represent an object needs to be large enough to accommodate the overall object and yet not so large that it unnecessarily inflates the size of the actual object. In the case of trying to squeeze between the bike rider and the car ahead of the self-driving car, if the BV for the bike rider is overly large it might deny the possibility of fitting between the bike rider and the car ahead, even though in reality it might have been feasible.

On the other hand, if the BV is overly skintight, a miscalculation could occur, striking the real-world object by having gotten too close to it. This goes back to the earlier point about arriving at a Goldilocks size for the BV in terms of representing the real-world object.

One of the most essential aspects of the virtual world model involves discerning whether there is going to be an intersecting of two or more Bounded Volumes. For example, if the Bounded Volume or virtual container representing the self-driving car is going to intersect with the Bounded Volume or virtual container of the bike rider, this needs to be anticipated and dealt with by the AI action planner.

If two or more Bounded Volumes are anticipated to intersect, the possibility of a collision arises. I’ll use the word collision to refer to a circumstance of two objects that actually brush against each other or ram into each other, physically doing so. When I use the word intersecting, it means that there is the potential for an actual collision, though it is not necessarily a collision per se. This depends upon the amount of inflated size that we have for the Bounded Volumes involved.

Suppose the bike rider is represented by a BV that is twice the actual size of the bike rider. This implies a kind of virtual buffer or cushion, allowing for wiggle room of the real-world object within the imaginary container that we’ve concocted. A self-driving car entering into the Bounded Volume of the bike rider might not actually hit the bike rider, since there’s that extra space within the virtual container. In a sense, the inflated size of the BV can provide a margin of error into which another BV can wander and yet not actually touch the real-world object therein.
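
Here is a minimal sketch of that buffered intersection test using axis-aligned boxes; the positions are invented and the inflation amount is an assumed tuning parameter:

    from dataclasses import dataclass

    @dataclass
    class AABB:
        """Axis-aligned Bounded Volume: min and max corners along x, y, z."""
        lo: tuple
        hi: tuple

        def inflate(self, pad):
            # Grow the box by `pad` on every side: the virtual cushion.
            return AABB(tuple(v - pad for v in self.lo),
                        tuple(v + pad for v in self.hi))

        def intersects(self, other):
            # Boxes overlap only if their ranges overlap on every axis.
            return all(a_lo <= b_hi and b_lo <= a_hi
                       for a_lo, a_hi, b_lo, b_hi
                       in zip(self.lo, self.hi, other.lo, other.hi))

    # Invented positions, in feet: a self-driving car and a bike rider 1 ft away.
    sdc = AABB((0, 0, 0), (15, 6, 5))
    bike = AABB((16, 0, 0), (22, 2, 6))

    print("raw BVs intersect:", sdc.intersects(bike))                    # False
    print("buffered BVs intersect:", sdc.inflate(1.0).intersects(bike))  # True

The buffered test fires before the raw volumes touch, which is how an intersection warning can precede an actual collision.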

I’m sure that you likely think about your own car in the same way. You are apt to have a sense that there is an outer boundary of a few inches beyond the actual size of your car. When you try to park your car into a rather confining parking spot, you are subliminally aware that you have a little bit of a cushion of space. As you get closer and closer to another parked car, you begin to sweat about whether you are undermining that cushion and getting so close that you’ll rub against or scratch the other car.

This also brings up the earlier topic about overhanging or protruding aspects of your car.

If you have an antenna on your car that goes upward a few feet, above the roof height of your car, you often tend to ignore the antenna as being part of a kind of Bounded Volume, under the belief that if the antenna hits something it will just do so lightly and spring back. On the other hand, your side mirrors typically have to be included in your sense of a Bounded Volume due to their possibility of breaking off or damaging another car if you rub against the side mirrors while parking your car (many cars today have rejiggered the side mirrors to pivot inward when touched, allowing some flexibility in their positioning).

Remember too my story of the car with poles dangling from its rooftop. What do you think the Bounded Volume of that car should be? You could make the Bounded Volume consist of the car only and not include the dangling poles, but then the AI self-driving car might not calculate the possibility of hitting those poles. If you make the Bounded Volume include the dangling poles, the overall size of the BV is larger and might preclude the AI from realizing that it could fit underneath those poles, which, recall, the sports car in my story was able to do.

Along those lines, the other aspect is whether the Bounded Volume is a regular shape such as a cube or cuboid, or whether you want to have it be an irregular shape. By using an irregular shape, you could potentially encompass the car that has the dangling poles and then also encompass the dangling poles, but not have one overarching cube or cuboid that tries to do so. The shape would be tailored to the contours of the actual object.

The problem with an irregularly shaped contour is the added mathematical and computational effort involved in dealing with the BV in the virtual world model. Time is crucial, and making the BV an irregular shape is going to cost you in added computer processing time. There is a tradeoff between using simpler shapes, which are computationally less expensive, and using a complex shape that chews up computational processing time.
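
One middle ground is a compound Bounded Volume: represent the pole-laden car as a small set of simple boxes rather than one huge cuboid or a truly irregular surface. A minimal sketch, with invented dimensions, is below:

    # Each box is a (lo, hi) pair of corner tuples; dimensions in feet, invented.
    def boxes_intersect(a, b):
        (alo, ahi), (blo, bhi) = a, b
        return all(al <= bh and bl <= ah
                   for al, ah, bl, bh in zip(alo, ahi, blo, bhi))

    # One overarching cuboid swallowing the car plus the dangling poles...
    big_box = ((0, 0, 0), (25, 6, 6))

    # ...versus a compound BV: the car body and the pole bundle kept separate.
    car_body = ((0, 0, 0), (15, 6, 5))
    pole_bundle = ((12, 2, 4.5), (25, 4, 6))
    compound = [car_body, pole_bundle]

    def compound_intersects(parts, other):
        # A compound BV intersects if any member box does; cost grows with
        # the number of parts, which is the computational tradeoff.
        return any(boxes_intersect(part, other) for part in parts)

    # A low sports car sliding through under the poles, behind the car body.
    sports_car = ((16, 0, 0), (20, 5, 3.5))

    print("single big box blocks it:", boxes_intersect(big_box, sports_car))    # True
    print("compound BV blocks it:", compound_intersects(compound, sports_car))  # False

The compound version correctly lets the sports car pass under the poles, at the price of checking two boxes instead of one.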

There are various algorithmic tricks that you need to employ when trying to make the virtual world model as fast as possible.

For example, when trying to determine whether two or more Bounded Volumes are going to intersect, you can use the Separating Axis Theorem (SAT), a quick method for determining whether two convex BVs overlap (and, if they do, the minimum penetration vector). Essentially, you mathematically try to see if there is a line or plane that can fit between two Bounded Volumes; if so, the two are not intersecting.
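
A minimal 2-D sketch of SAT for convex polygons follows; the coordinates are invented, and a full system would extend this to three dimensions and to the motion predictions discussed earlier:

    def edge_normals(poly):
        # One perpendicular per edge; these are the candidate separating axes.
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            yield (y1 - y2, x2 - x1)

    def project(poly, axis):
        # Interval covered by the polygon when projected onto the axis.
        dots = [x * axis[0] + y * axis[1] for x, y in poly]
        return min(dots), max(dots)

    def sat_intersect(a, b):
        for axis in list(edge_normals(a)) + list(edge_normals(b)):
            amin, amax = project(a, axis)
            bmin, bmax = project(b, axis)
            if amax < bmin or bmax < amin:
                return False   # a separating line exists: no intersection
        return True            # no separating axis found: the BVs overlap

    # Two rectangles, one rotated (say, a car mid-lane-change); invented coords.
    upright = [(0, 0), (4, 0), (4, 2), (0, 2)]
    rotated = [(6, 1), (8, 3), (6, 5), (4, 3)]
    nearby = [(3, 1), (6, 1), (6, 4), (3, 4)]

    print(sat_intersect(upright, rotated))   # False: a separating axis exists
    print(sat_intersect(upright, nearby))    # True: no separating axis exists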

On-Board Versus In-The-Cloud

One aspect that I often get asked about at conferences is whether the virtual world model, and its use of Bounded Volumes as virtual containers representing real-world objects, can be calculated outside of the self-driving car, such as in the cloud of the automaker or tech firm.

You would normally expect that all of these calculations have to take place in the on-board AI system of the self-driving car, since the timing aspects are so vital. That’s why AI self-driving cars are chock-full of the fastest processors and tend to require a large amount of computer memory on-board.

The key problem with trying to place the virtual world model into the cloud is the latency involved in conveying the matter to and from the cloud. The time involved in pushing sensor data from the AI self-driving car up to the cloud, plus the time to get a result transmitted back down to the self-driving car, chews up so much of the available window that the AI self-driving car might not be able to act in a timely manner.
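
Some rough arithmetic makes the concern vivid; the round-trip time here is an assumed illustrative figure, not a measurement of any actual network:

    # How far does a car travel while waiting on a cloud round trip?
    speed_mph = 65.0
    speed_ft_per_s = speed_mph * 5280 / 3600     # about 95 feet per second

    round_trip_s = 0.200   # assumed 200 ms to upload sensor data and get a reply

    print(f"distance covered during round trip: "
          f"{speed_ft_per_s * round_trip_s:.0f} feet")   # roughly 19 feet

Nineteen feet is more than a car length of travel with no fresh answer in hand, which is why the tight inner loop stays on-board.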

Usually, any kind of OTA (Over-The-Air) aspects of the AI self-driving car are going to be handled for matters that are not time-crucial per se. At the end of a day of driving, for example, the AI self-driving car might push the day’s worth of collected data up to the cloud, and meanwhile the cloud might be pushing down into the AI self-driving car the latest patches and updates. There are hopes that with the advent of 5G, along with edge computing, perhaps these aspects can take place in a more real-time way, though it is still seemingly unlikely to happen fast enough to be in-the-loop during actual driving activities.

Some have proposed a hybrid approach of having a virtual world model within the on-board system and a mirrored version in the cloud. The cloud-based version would be used to explore “longer term” time-frame actions of the AI self-driving car, such as examining what’s taking place some distance ahead of the self-driving car. Meanwhile, the on-board AI system focuses on the more immediate tactical aspects. This splitting of the effort allows exploiting, say, exascale supercomputing power at the cloud level and can provide an added boost to what the on-board AI is able to undertake.

For more about OTA, see my article: https://www.aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For my article about 5G, see: https://www.aitrends.com/selfdrivingcars/5g-and-ai-self-driving-cars/

For the use of exascale supercomputers and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/exascale-supercomputers-and-ai-self-driving-cars/

For my article about edge computing, see: https://www.aitrends.com/selfdrivingcars/edge-computing-ai-self-driving-cars/

Another aspect about the virtual world model involves the use of Machine Learning or Deep Learning.

Much of the virtual world model effort is primarily logic-based, involving coding up the calculations and predictions that come from the virtual world model. Interestingly, it is possible to discern patterns of driving behaviors by leveraging the virtual world model. By collecting data about the virtual world model over time, you can use Deep Learning to identify traffic and driver behaviors, which can then be used to improve the predictive capabilities of the AI system.

Conclusion

There used to be an advertising campaign about how plumbers were the unseen and unheralded heroes of making sure that your house’s pipes and water were flowing right. Most people don’t put much thought toward their plumbing, other than when it breaks and water is spilling onto their floors. You just assume that the plumbing is done right, and until or unless it goes on the fritz, you aren’t concerned about it.

The virtual world model and the Bounded Volumes in AI self-driving cars are similar to the plumbing and plumbers. You don’t see much attention paid to these elements. Instead, all the glory seems to go toward the sensors. There are weekly news updates about how a new LIDAR sensor is better than another one, or how a camera for AI self-driving cars has come out that has better vision capabilities than present ones. Sensors, sensors, sensors.

As is hopefully now evident, if you have a lousy virtual world model, or at least one that is not timely and well-tuned, it won’t matter how good the sensors are: it’s going to be a mess knowing what surrounds the self-driving car, and the AI action planner will be blind or thrown off-kilter by the lack of a proper and timely indication of the surroundings. Likewise, if the Bounded Volumes poorly represent the real-world objects, the AI action planner won’t be able to discern what makes sense in terms of avoiding objects and anticipating the intersecting of objects.

For those AI developers who toil away at this kind of plumbing, let’s herald them for their efforts. Though perhaps unseen and underappreciated, there is an impressive amount of complexity and incredible effort involved in these matters, plus much room left to advance these capabilities, which ultimately will make-or-break the advent of safe and successful AI self-driving cars.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.


