This will be exclusive here for the usual three days and will be set to public at the end of that time. It will appear on sabarton.com on 2 April 16.
So, about the actual post: someone tampering with a self-driving car's software/firmware is already a much-discussed concern. Most of the articles I've seen have explored the possibility of using the WiFi access point of the vehicle itself to access and "hack" the vehicle, and my understanding is that this has already been done in at least one controlled experiment. A hacker could lock or disable the brakes, affect the steering, cause the vehicle to see "ghost" vehicles or become blind to actual vehicles or pedestrians, ignore speed limits, ignore traffic signals (which, once self-driving vehicles become ubiquitous, will likely be 'visible' only to the vehicles themselves in areas closed to manually driven vehicles), or... well, you get the point. The possibilities are extensive.
And, of course, other possibilities come to mind. They'll have to be dealt with as best we can, just like the hacking problem, as self-driving vehicles become more common.
Drivers might alter their vehicles' software themselves. This will likely be illegal, and the mods will range from harmless to extremely dangerous. Cars that drive themselves will likely have no human-accessible controls like a steering wheel or brake/accelerator pedals, so one likely illegal mod would be adding controls via touchscreen or a videogame-style controller. Other mods might let vehicles exceed speed limits, open doors while driving, alter pollution controls (looking at you, naughty Volkswagen), flash rude messages to other drivers on a variable-opacity touchscreen windshield, and who knows what else. Once the actual vehicles are here, we'll discover all sorts of things we haven't thought of yet, as with just about every other piece of tech we've come up with.
We'll want advertising blockers for our cars by and by, too. I can't imagine advertisers won't be happy to pay to have messages projected and voiced right inside your car as you drive. Imagine how quickly you'll get tired of hearing "Would you like to stop at McDonald's?" and "Come shop at Macy's, 20% off all housewares today!" If the advertising deals with automakers get aggressive enough -- and when have advertisers not gotten too aggressive for their own good, given the chance? -- you may find yourself having to respond to a constant stream of default-yes prompts. "Stop at Taco Bell? Touch CANCEL to decline."
Yes, we'll need CarAdBlock.
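Half-jokingly, the core of a CarAdBlock wouldn't even be complicated. Here's a toy sketch of the idea: intercept each incoming prompt and auto-decline anything that defaults to "yes" unless you've explicitly allowed that advertiser. Every name in it (`AdPrompt`, `CarAdBlock`, the response strings) is invented for illustration; no real in-car API looks like this, as far as anyone knows yet.

```python
# Hypothetical sketch of a "CarAdBlock" filter for in-car ad prompts.
# All class, field, and response names are invented for this example.

from dataclasses import dataclass

@dataclass
class AdPrompt:
    advertiser: str
    message: str
    default_yes: bool  # True if the prompt accepts itself unless you cancel

class CarAdBlock:
    def __init__(self, allow_list=None):
        # Advertisers the rider actually wants to hear from.
        self.allow_list = set(allow_list or [])

    def respond(self, prompt: AdPrompt) -> str:
        """Decide the response to send on the rider's behalf."""
        if prompt.advertiser in self.allow_list:
            return "SHOW"    # let allowed offers through
        if prompt.default_yes:
            return "CANCEL"  # auto-decline the "touch CANCEL to decline" traps
        return "IGNORE"      # harmless opt-in prompts are simply dropped

blocker = CarAdBlock(allow_list=["FavoriteCoffeeShop"])
print(blocker.respond(AdPrompt("Taco Bell", "Stop at Taco Bell?", default_yes=True)))
# auto-declines the default-yes prompt
```

The point of the sketch is just that the dangerous prompts are the default-yes ones: those are the ones a blocker has to answer for you, not merely hide.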
And of course, there's the hazard of malware as the story suggests. Imagine your family vehicle being ransomwared right before a crucial work meeting. Or before your holiday dinner gathering, and you're bringing the main dish.
Very likely, efforts to thwart malware and illegal mods to vehicle software will be more aggressive than those directed at computers and smartphones. Penalties will be more draconian -- and if they're not, they soon will be after the first few vehicular injuries or deaths caused by malware, mods, or hacking.
But that won't stop some people from creating malware for cars and so forth. There's always someone who wants to ruin the fun.
Some think these and other hazards will prevent the self-driving vehicle from becoming popular. I don't think that's going to be the case at all. We are already willing to accept a MILLION WORLDWIDE DEATHS PER YEAR for our current vehicles. If malware "only" costs a hundred thousand lives yearly, there will be a public outcry, and it will slow adoption by the public. But business and government will continue to pursue the option with the lower cost in both cash and lives -- and that will be the self-driving vehicle.