It's interesting that on the "what could go wrong" discussions on HN, the focus has been on the discussion of insurance and events like truck-jacking. Those are the issues of today. I think there's a far more interesting future at play.
These trucks can and will put people out of work. Right now, the large majority of autonomous-vehicle testing is focused on "perfect environments". Yes, Google will say that test driving autonomous cars in San Francisco is far from a perfect environment; however, in a future where people are losing their livelihoods, ways to hack autonomous vehicles will be explored, and I'm not talking just about hacking a vehicle's CAN bus. All of these vehicles rely on technologies like radar, LIDAR, and IR vision, none of which are foolproof and each of which could fall to very specific attacks. I hope the auto manufacturers are considering this future. After all, we have technologies today, like GNU Radio and the HackRF One, that have brought what was once the domain of the nation state to the general public.
Farming is actually a great example. There were farmers who did fight back against farm machinery. Sabotage of tractors and equipment was an actual thing here in the US and abroad. You can even find whole groups of anti-tractor types in the American Amish country.
These days it's sabotage against GMO crops. You'll find Monsanto and other companies will place their GMO test fields in regions of the country known to be sympathetic and/or will keep them under armed guard.
But it has everything to do with this topic. The Amish believe one must work in harmony with nature; tractors, for example, threaten this harmony. I'm just waiting for some religion somewhere to declare that self-driving vehicles are the work of the devil, or a threat to the Protestant work ethic. There might even be a political aspect to this, as they are a signal of the coming "liberal leisure society."
> The Amish believe one must work in harmony with nature.
I don't quite like that description either. I think it's more that they want to encourage interdependence within the community. They are not opposed to technology that they believe doesn't threaten that community dynamic.
80 years is a very long time. It's long enough for two or three generations to grow up, decide what they want to do for their career, notice their dad's career outlook doesn't seem to be on the rise, and go to work in a factory or office.
If the number of truck drivers goes down by 90% in the next 80 years, that would be fine. If it goes down by 90% in the next 8 years, then we'll have a lot of mis-trained ex-truck drivers out there, and that's nothing to scoff at.
According to the US Bureau of Labor Statistics, the average age of current truck drivers is 51. Where is a guy 14 years away from retirement, who probably doesn't have a college degree, going to go? I suppose he COULD get a loan and try to get a degree, then pay off that loan and earn a living for another 10 years.
Farmers transitioning is a bad analogy. People live longer, require more education, and have less control over their work lives if they aren't tech-savvy.
I see four possibilities: an aging work population, 65 and older, working entry-level skilled jobs to pay off the college loans they took out later in life; a larger "welfare" state; a better public higher-education program; or more millennials taking care of their boomer parents, because their parents couldn't find work and Social Security hasn't kicked in yet.
You are under the assumption that 'efficiency' means resource efficiency. That is untrue. Exhibit A is how convoluted our current system is with our non-renewables. Replacing people with machines won't fix that.
Efficiency is celebrating the fact that you can drop your expensive annual costs and the liability of employing humans to conduct your business. People are expensive and demand things like rights and benefits. Machines can be had with a purchase/lease and maintenance costs.
Machines won't negotiate or get a lawyer. That's efficiency.
Meanwhile, we'll strip the earth to build these machines in the name of efficiency while more of the population is made useless and left to rot by an economy that functions without them.
That's an interesting narrative. It is however historically inaccurate. All throughout human history we have replaced tedious, dangerous and repetitive tasks through means other than human labor. Increased efficiency (let's set aside the definition for the moment) has only helped us.
Who is the us "efficiency" has helped? At the end of the day, do I benefit from the fact that Company X's truck no longer needs a driver?
Does that benefit really outweigh ruining the livelihoods of millions of people, whose family now can't pay taxes and lost their ability to see doctors or to maintain a roof over their heads?
When do we get to reap the benefits of this automation?
There is a lot of money being made by letting people go and letting the tax payer pay for the societal damage.
Will those millions of drivers be transferred to a new department within the company? Will they get a severance check from the profits of this automation? Or will unemployment and lack of medical care shave 20+ years off their lives so a shipping company can save a dime that society gets to pay for?
True free market capitalism is the polar opposite of slavery. The only transactions which are allowed are ones in which both parties have consensually agreed to the set terms.
Welfare isn't effective, I agree. What do you think about a NIT (negative income tax)?
Capitalism does not have a concept of consensual transactions, only transactions. Exploitation of the slave class is something multiple empires have grown out of.
Whenever people CAN exploit (in the name of capitalism, or racial superiority, or any other equally sociopathic concept) they will. Why are we working hard to put our neighbors out of the job in the name of some ideology? It seems a lot like you literally care more about the efficiency of the market than you do giving people a form of sustenance (or income they can barter for sustenance).
Of course the underlying assumption here is that putting people out of work will negatively affect the economy or the well-being of individuals, which isn't necessarily the case. This is how our current system works (so it will in the short term), but if we want to collectively progress as a society, I think most would agree that the average individual should have vastly more leisure time in the future than they do today. Perhaps that means not everyone needs to work.
I believe we're fast approaching the need to re-evaluate how we treat (un)employment and the role of individuals with an imposed obligation towards economic contribution, particularly due to the automation and/or efficiency we gain through technology. Will we always have enough work for everyone to sufficiently provide for themselves without "redistribution of wealth" programs? More importantly, if we force everyone to work to provide for themselves, is the work done actually creating value to society? Is it necessary?
We still have train drivers, even though they're almost, but not quite, redundant. I can't imagine how trucks could be so automated that no driver at all is needed when we don't even do that with trains. Truck drivers will surely still be with us for long enough to retire.
Divide the hourly cost of a human railroad engineer by the total value of the goods they transport by train. Then divide the hourly cost of a truck driver by the total value of goods that can fit into a single truck. The latter ratio is orders of magnitude greater, and that means there's a much higher economic incentive to replace the truck driver than there ever will be to replace railroad engineers. And I would wager this difference in incentives widens when the amount of damage done by inattentive truck drivers vs. inattentive railroad engineers is taken into consideration.
Besides, operators will probably desire a small crew to watch over their trains anyway for the foreseeable future, regardless of whether a human is in the pilot's seat.
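As a rough sanity check on that ratio comparison, here's the arithmetic with hypothetical round numbers (the wage and cargo values below are illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope: hourly labor cost per dollar of cargo being moved.
# All figures are made-up round numbers for illustration.

def labor_cost_ratio(hourly_wage, cargo_value):
    """Hourly labor cost per dollar of goods in motion."""
    return hourly_wage / cargo_value

# A single truck: assume a ~$40/hr driver moving ~$100k of goods.
truck_ratio = labor_cost_ratio(40, 100_000)

# A freight train: assume a ~$40/hr engineer moving ~100 cars
# at ~$100k of goods each.
train_ratio = labor_cost_ratio(40, 100 * 100_000)

print(truck_ratio / train_ratio)  # roughly 100x under these assumptions
```

The exact multiple obviously depends on the assumed numbers, but the point stands for any plausible values: one engineer spreads their wage over vastly more cargo than one truck driver does.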
I also think it is weird that they don't start with trains.
In Denmark the Metro is operated by a computer, but the whole thing is built around this. The next step to me is not a truck, but a "normal" train that still has to interact with the rest of the train network around it. Then we can move on to trams and then trucks and buses.
Truck drivers probably have a lot more accidents and job performance variability. Also, the railroad engineer's salary is probably an even smaller portion of the load's value than for a truck driver.
It's interesting how many conversations on HN devolve into "This will put people out of work".
Of course it will.
Think of how many in industries related to horses (hay, stalls, cleaning, horseshoes...) were put out of business when the automobile was invented.
Think of the guys that used to drive around and put blocks of ice in your fridge before electricity.
Or countless millions of others that were put out of work when a better/different way was found to do something.
It's fine. Those people will re-train. When there are no more coal plants there will be a lot more people working in the solar/wind business. It's OK, don't panic.
No, people will re-train, but those folks directly put out of work in this instance will not likely turn around and immediately find work as a software engineer maintaining autonomous vehicles. In the long run society retrains, but there is a disruption period that affects real people.
The article reports that this model of truck is "the first to be cleared to drive on US roads," but based on other reporting I have read about this same model, I think that claim is an exaggeration. A Washington Post report[1] says,
"Daimler’s tractor-trailer, called the Inspiration Truck, relies on cameras and radar to guide itself. It does not use LIDAR, a sensor that is being used by other self-driving vehicle makers such as Google. Like all autonomous vehicles today, the Inspiration Truck has some limitations. It can’t drive on its own during heavy rain, snow or crosswinds above 30 or 40 mph.
"Daimler developed a training system for its drivers that Nevada’s department of motor vehicles approved. It’s fairly simple, teaching them how to activate and deactivate the autonomous driving system. It says it picked Nevada because of how thorough its rules are for autonomous vehicles."
A thorough National Public Radio report[2] also describes this as a one-state, limited program of experiments.
Similar to a train. An operator is present, waiting to take control if necessary. I bet there'll be a dead man's vigilance device[1], just like in the rail industry.
Autonomous trucks will probably arrive earlier than autonomous cars. For cars, autonomy would be a convenience feature for people; for trucks, it would save tons of money.
Where does the saving come from if they still require a driver? From the article, it sounds like the biggest gain is probably in hiring. Possibly also additional savings from fewer accidents (lower overall fatigue).
Semi-autonomous road trains, assuming - and I admit this isn't a minor thing ;) - only the lead lorry need be occupied, would let you get more lorry-hours per driver hour. This might not actually improve the driver:lorry ratio, but by spreading one driver over multiple lorries you could operate the lorries in shifts with that same ratio.
Say currently one driver can drive 8 hours per day. So if you need to drive for 24 hours, it's going to take 3 days. If you had 3 lorries doing that journey, it would take 3 days.
But say you had your semi-autonomous road train. Now you could do the journey in 24 hours. One driver drives for 8 hours, then the next guy, then the next guy. The two lorries behind just tag along.
Imagine 3 x 25 hour journeys, where the destinations didn't quite match. Previously this would take 4 days. This time: there's 24 hours of shared journey and an extra unshared hour of travel at the end of the route. Now your drivers have to split up at that point. The guy who was driving the road train last will have to wait 16 hours, so his journey takes 24+17 = 41 hours; the guy who was driving before him will have to wait 8 hours, so his journey takes 24+9 = 33 hours; the other guy can jump in his lorry and go straight away, so his journey takes 25 hours.
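The shift arithmetic above can be sketched in a few lines. The 8-hour driving limit, 16-hour rest requirement, 24-hour shared leg, and 1-hour unshared tail are taken straight from the scenario described:

```python
# Journey times for a 3-lorry semi-autonomous road train, per the scenario
# above. Assumptions: each driver may drive 8h and then must rest 16h; the
# shared leg is 24h (three 8h shifts); each lorry has 1 unshared hour at the end.

DRIVE_LIMIT = 8
REST_REQUIRED = 16
SHARED_LEG = 24
UNSHARED = 1

def total_journey(shift_end_hour):
    """Total hours for the driver whose shared-leg shift ended at shift_end_hour."""
    rested = SHARED_LEG - shift_end_hour   # rest already banked by hour 24
    wait = max(0, REST_REQUIRED - rested)  # extra rest owed before the last hour
    return SHARED_LEG + wait + UNSHARED

for end in (8, 16, 24):  # drivers 1, 2 and 3 finish their shifts at these hours
    print(end, total_journey(end))
# prints 25, 33 and 41 hours respectively, matching the figures above
```

All three still beat the 4 days (96 hours) each lorry would need solo under the one-driver-per-lorry model.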
Actually, why aren't there fully autonomous freight trains on railways yet? We have driverless commuter trains (Docklands Light Railway), so why not the same for long-distance freight? The infrastructure is there (automated signalling, in-cab devices), so it doesn't seem too big a leap. One hurdle might be unions; I recall ASLEF or someone got upset about the DLR when it first appeared?
How do you synchronize multiple trucks? One company wants their truck to leave at 6pm so something will be available the next morning. Another nearby needs to wait till 6:30 before they're ready to leave. Where's the train?
Perhaps trains would form spontaneously as automated trucks encountered each other? You can't plan it too tightly because one might be late. So it sounds like every truck will still need a driver ready to be the leader or on their own.
Keep in mind that one reason trucks are so much more popular than rail trains is their flexibility. One by one as they're ready and different start and end points each.
But the drivers don't have to be driving. They just need to be awake and alert in case the system tells them to pay attention, so they could be doing some other job as well, like, I dunno, writing code.
Once these trucks are out there in numbers, the financial incentive to eliminate drivers will be so high (and there will be safety data) that the laws will change to allow true driverless.
"In the meantime, several issues still need to be addressed. It is not yet clear how insurance companies might cover self-driving vehicles, for instance, or where blame would be attributed in a road accident."
Who is responsible is going to be the big make / break for all self-driving vehicles. I get the feeling this is going to be very messy.
I don't think that's going to be that big of an issue. From insurers' point of view, the probabilities of where fault might lie are highly predictable from past actuarial data - i.e., you could make a good guess about what percentage of accidents were caused by commercial vehicles being hit, or by mechanical failure, or by poor maintenance, and so on, so as to isolate the driver as a factor. As soon as data is available showing that a computer-based driver is significantly safer than a human one, this is going to be reflected in the cost of insurance and will exert inexorable commercial pressure to automate driving wherever possible and make human control the exception.
It works out OK for everyone; for insurers, a software error is the liability of a large firm with deep pockets, from whom up-front costs can be recovered, as opposed to irresponsible or inadequate human drivers whose personal assets may not cover the loss. From the software vendors' point of view, actuarial data provides very clear targets to shoot for, as well as a lot of data about the financial cost of failures. It's OK to take cost factors into account from an ethical standpoint as long as they're used to maximize safety by efficiently allocating resources, rather than to minimize production cost.
If a licensed mechanical engineer signs off on a set of building plans and the building collapses, they are held responsible. Should software for self driving cars be held to a similar standard?
What is the current liability of software engineers in human-life-dependent programming? Was there any liability for the people who wrote the Toyota code? https://news.ycombinator.com/item?id=9440094
I don't always understand this POV. My current car has radar guided cruise control and auto braking - if these fail, who's responsible? Isn't auto-driving cars the same thing on a larger scale?
Suppose 100% of the responsibility is borne by manufacturers. How is this different from adaptive cruise control or ABS today? For one, the liability cross-section increases dramatically. In 2013, the U.S. auto insurance industry incurred $125 billion in damages [1]. With human error eliminated, this number will shrink. But it will also shift to manufacturers' balance sheets.
What about insurance? Insurance works with uncorrelated risks. You mowing down a pedestrian doesn't cause everyone around you to do the same. This keeps insurance companies' instantaneous expected losses manageable. Coding errors and recalls are not uncorrelated: if your car has an error, so, likely, does every other car in its run. This concentrates losses and makes them difficult to insure.
There are solutions. But systems will need to be built to allocate risk.
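That concentration effect is easy to see in a toy simulation. A minimal sketch, assuming made-up round numbers for fleet size, accident probability, and loss per incident (none of which come from the thread):

```python
# Toy Monte Carlo: independent per-car accidents vs. a correlated fleet-wide
# software bug. Same expected loss either way, but very different tail risk
# for the insurer. All figures are illustrative.
import random

random.seed(0)
N_CARS, P, LOSS, TRIALS = 1_000, 0.01, 30_000, 1_000

def independent_year():
    # each car has an accident independently with probability P
    return sum(LOSS for _ in range(N_CARS) if random.random() < P)

def correlated_year():
    # with probability P a bug ships, and every car in the run is affected
    return N_CARS * LOSS if random.random() < P else 0

ind = [independent_year() for _ in range(TRIALS)]
cor = [correlated_year() for _ in range(TRIALS)]

# Average yearly losses come out similar, but the worst year under the
# correlated model is the whole fleet at once:
print(sum(ind) / TRIALS, max(ind))
print(sum(cor) / TRIALS, max(cor))
```

Under the independent model the insurer's worst year is a modest multiple of the average; under the correlated model it is the entire fleet's losses landing in one go, which is exactly what makes it hard to price.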
> Coding errors and recalls are not uncorrelated. If your car has an error, so, likely, does every other car in its run.
That is assuming that errors are the main cause of insurable incidents and not simply situations for which the software could not compensate beyond stepping on the brakes in the hopes of minimizing impact energy.
Traffic is chaotic and not fully predictable, so there's always some baseline risk that may exceed the risk of non-catastrophic bugs.
If you have controls to override the car when it fails, and you shouldn't be relying purely on it, then you're at fault. If you have no means to override it and you're expected to rely on it entirely, surely you can't be at fault; what could you have done?
Given the legal environment in the U.S. where lawsuits are going to be brought, we don't have the case law and law firms are going to go after the biggest pocketbooks. It's not a technical issue, but a social. Look at the early days of cars for a preview.
There's a lot of potential for autonomous vehicles to have a black box that records everything that happened in the case of an accident. It may make it much clearer who is at fault in accidents. With driving being a repetitive task requiring attention and sometimes quick responses, I'm going to bet that in most cases it will be a human at fault.
You don't have to have 'at fault' in order to have insurance.
Insurance responds to the rules of the game. If the rules of the game are that damages from autonomous vehicles are capped, or that multi-vehicle collisions automatically share costs, then the cost of the insurance premium will adjust to that reality.
It takes a leap of thinking to get away from the current obsession with precisely allocating fault in vehicle accidents, but it's entirely possible to do.
Out of all the issues, insurance is the easiest one to solve.
Why not simply agree on a certain accident rate with the insurance company, and they cover anything that stays within that rate as long as there isn't any gross negligence (which would be pretty hard to prove, considering that the software can be audited in advance) or malicious behavior (kill 10 kids every Christmas as long as we stay within the agreed rate).
I mean smoothing out low-incidence, high-impact events is pretty much their business, no?
But insurance companies are founded on the assumption that they will only have to pay out in the exceptional case, no?
If they can know for a fact that a certain accident rate is guaranteed for a given car, then they will explicitly disclaim liability for accidents within that rate, similar to how all product warranties expire before the far end of the bathtub curve.
Otherwise it's like selling flood insurance on a flood plain, or health insurance to cancer patients: something that makes no economic sense and only exists where the government has forced it to.
> But insurance companies are founded on the assumption that they will only have to pay out in the exceptional case, no?
"Exceptional case" is still a well-defined rate for them. They calculate risks and adjust insurance premium accordingly. That's pretty much their business.
With sufficiently large amount of insured parties those "exceptional events" are actually fairly common.
And that's what I was referring to. If the insurance company is in the business of calculating incident rates anyway they might as well cooperate with the autonomous car manufacturer to drive down incidence rates to a covered level.
Just like car insurances will offer reduced premiums to experienced, accident-free drivers they can offer reduced fees to vehicle manufacturers who statistically cause fewer accidents.
Everything has a guaranteed failure rate higher than 0, so is all insurance unprofitable?
The warranty example is bad because warranties end instead of getting renewed. They'd happily renew the insurance for the cost of a new product minus epsilon.
No they won't; they'll be happy to cover it because they'll know who to recover from. At bottom insurance is a cash-flow management business. They know very well what the probabilities are and build those into the price.
There are a raft of unaddressed safety and licensing issues in verifying that autonomous navigation systems can sense danger and perform evasive maneuvers well enough to preserve life and property. This is one of the reasons why incremental features are less risky than diving all in to uncharted waters.
If trucks become completely autonomous what will prevent someone from standing in front of one causing it to stop and then robbing it?
Unlike human drivers, who may not stop if they believe they are in danger of being carjacked, I assume an autonomous system will be required to avoid hitting a human at all costs.
The same thing that stops people from lobbing rocks through your windows or hammering your head walking down a street.
Everyone is always a few seconds away from being able to commit terrible crimes.
The mere introduction of new forms of crime does not necessarily lead to an increase in criminality. Nor do reactive policies aimed at specific criminal narratives necessarily reduce it.
In other words, it's going to happen, just like every other crime already does. Existing infrastructures of human society can easily facilitate this new form of misconduct.
I think having to rob a human is a significant barrier, that would be eliminated. A human is not going to stop as easily, can drive evasively, is a potential witness, can call the police almost immediately, etc. Bringing an automated truck to a stop would be comparatively simple. Many criminals are not that smart - if you lower accessibility barriers to high-reward crimes you will see more of them.
I believe accessibility may increase the frequency with which someone attempts a crime, but not the number of people who will engage in it.
Think about grocery stores ... many leave items outside the store. You're expected to inconveniently walk in and pay for it.
Yet do you see people rampantly ripping off those potted plants or cantaloupes? Although that crime is highly accessible it doesn't happen.
Now if we were to leave cigarettes out there, of course they would disappear. But the question here is whether there would be more thieves, or just more thievery by the same individuals?[1]
The theory is that certain people have a relationship with society which, in their minds, justifies these actions. This theory explains looting grocery stores during some catastrophic natural disaster. Under this theory, if you can change the dynamics by which someone identifies with society, you'd see fewer issues.
But this is mere speculation. Is there any literature on this?
----
1. In the lens of this theory, one may claim looting cigarettes is part of their terrible nicotine addiction. This unfortunate narrative constitutes an (unjustifiable) reason to steal the item. The idea "I'm a screw up so this is the role I play in society" is a very Zimbardo-prison way of looking at things - but I think it's unfortunately true.
An autonomous vehicle is an even better witness, even if it only has the sensors it needs for driving, and can also call for help immediately when it stops due to a person on the highway.
There are laws against robbing trucks. I guess they will stay about as effective as they are now; it is not as if there aren't ways for would-be robbers to stop a truck now (for example, by crashing a stolen car in front of it).
As others said, the truck might call 911 before it has come to a stop. Also, 911 operators might get a remote override switch that allows them to instruct the truck to take some evasive action.
I do expect additional laws in the area of autonomous cars, but not to prevent robberies. If autonomous cars become really, really good and ubiquitous, there is no reason for pedestrians to watch out for cars when crossing a street. That will lead to some funny, but disruptive (for the flow of road traffic), videos on YouTube. (Yes, you can play Frogger on an 8-lane highway with your eyes closed and survive. The crocs aren't robots yet, though.)
> yes, you can play Frogger on an 8-lane highway with your eyes closed and survive.
I wouldn't be so sure about that. I don't know about the US, but here in Germany the autobahn is supposed to be off-limits for people (and vehicles with a design speed below 60km/h for that matter).
This is obviously to allow drivers to drive with the assumption that they don't have to be ever-vigilant about cyclists or children crossing the street.
Autonomous vehicles will likely operate on the same assumptions to behave more optimally given the constrained environment, i.e. drive in a more aggressive manner than they would do in a city.
That is, strolling at a moderate pace across a highway would probably carry about the same risk as ducking behind a parked car to jump out in front of a bus in the city.
Both are situations that no sane person should engage in and both are considered acceptable risks not worth defending against.
Cars on the autobahn will not want to hit deer, people getting out of cars parked on the emergency lane, or people walking along the road to place a Warndreieck (http://de.m.wikipedia.org/wiki/Warndreieck), a warning triangle, a few hundred meters behind their stranded car, either.
And I expect that what constitutes an acceptable risk will go down and down, as long as technology allows for it. It's already happening all the time. Seat belts are compulsory, air bags are compulsory, ABS is compulsory, etc.
But yes, the Frogger thing is not something I would advise doing. On the other hand, in cities it certainly will be much, much safer for pedestrians to step out into traffic without paying much attention to it. That will affect how pedestrians behave.
> And I expect that what constitutes an acceptable risk will go down and down, as long as technology allows for it.
Well, I do think that the risk of pedestrians on the autobahn isn't something we optimize for.
In the city? Yes.
Reducing the risks of occupants? Yes.
Driving defensively on the autobahn because someone might walk into the traffic? No.
The potential optimizations for the restricted environment that will be made there will be for increased traffic flow, more flexible lane switching or things like that. Maybe even saving fuel by driving in the tailwind of another vehicle.
Autonomous vehicles will follow whatever assumptions we give them. Given that they will, for the first decade at least, largely be operating alongside human-driven vehicles, chances are they will have exactly the same constraints.
I'd be surprised if autonomous vehicles are allowed to drive faster than manual vehicles any time soon.
If autonomous cars become really, really, good and ubiquitous, there is no reason for pedestrians to watch out for cars when crossing a street.
The laws of physics will still impose lower limits on stopping distances, and pedestrians who willfully cause abrupt stoppages resulting in injuries or property damage will have the book thrown at them - by that stage most vehicles are likely to have multiple cameras built in that will help identify the perpetrators of such acts.
Also, as pointed out elsewhere, vehicles will probably call 911 automatically as soon as the probability of an accident spikes.
Unless a whole group of people starts crossing the road all at the same time and holding hands (in which case, well, you can't stop people from committing mass suicide I guess), even a very rapidly moving car should be able to swerve out of its way to avoid a pedestrian walking across a motorway, so long as it detects said pedestrian at a sufficient distance.
If you assume that the self-driving cars talk to each other, then each has access to the sensors of the cars far ahead in front, so even in busy traffic they should be able to detect a pedestrian standing by the side of the motorway and start adjusting their flow accordingly, well ahead of any drunken stumbling by said pedestrian.
Unlike human drivers, cars can adjust their trajectories at high speed with minimal risk of accident.
That's true, but if swerving means sideswiping the car next to you, then it's going to be more logical to slam on the brakes. It depends on the density and speed of traffic. My view is a bit biased, as I live near a busy intersection and so I'm first on the scene for 3-4 collisions a year.
Properly outfitted, a self-driving vehicle will basically be a rolling surveillance unit. You could rob one, but you'd likely have quality HD footage of the entire incident from every angle. Who knows, maybe even precise measurements of their body courtesy of LIDAR.
I expect that insurance companies will eat the cost of hijackings, but push for more effective enforcement of existing laws, leveraging the surveillance capabilities of autonomous vehicles.
You also can't threaten a computer to stop it from immediately calling the police. A human driver might fear for his life and delay contacting the police until the encounter is over, but the computer driver could immediately call the police the moment it thinks it is being hijacked.
More likely, it would dial in to a call center run by the shipping company, which would call the police. East Podunk County, Wyoming won't have a system installed to handle robo-calls as legitimate.
- The driving algorithm will need to be capable of causing a 911 call to be made for various reasons, including that
- When there is no human driver, a carjacking is not actually that bad. There is insurance for this kind of problem, and it will be up to insurance companies to figure out if it's even worth attempting to mitigate.
So the robbers wear masks and shoot / paint over / jam the video equipment. They can do this at a distance because there is no risk of hitting a human.
> - The driving algorithm will need to be capable of causing a 911 call to be made for various reasons, including that
So the robbers choose a section of road far away from the nearest police and otherwise conducive at making them difficult to follow.
> - There is insurance for this kind of problem
If it gets to be too frequent of a problem, the insurance will become expensive enough that it will be easier just to keep a human around on more dangerous routes.
> "So the robbers wear masks and shoot / paint over / jam the video equipment."
Doing any of that would create a very distinct MO. You want to avoid that if you want to avoid being caught.
Also, have you seen how small cameras are these days? Maybe you could load up a crop-duster with paint and cover the whole truck, but there is no way you could hope to successfully shoot out every camera an insurance company could stick on a truck.
> "So the robbers choose a section of road far away from the nearest police and otherwise conducive at making them difficult to follow."
A problem not made any worse by driverless vehicles. If anything, you can program a truck to always follow approved routes while with human drivers you have to account for insubordination.
> Doing any of that would create a very distinct MO.
Crime is all about risk management. Sloppy purchases could leave a trail, yes, but video evidence can also leave a trail. One trail is easier to cover than the other: lots of people buy paint and guns, not many rob trucks.
> have you seen how small cameras are these days?
This isn't hide-and-seek. The game is not over for the perp if a shitty miniature camera buried in the truck's frame catches a few blurry frames of a masked man. I was responding to the "just do machine learning on LIDAR data" proposal that got thrown out elsewhere in the thread. Equipment that can reliably identify people who control the physical situation and don't want to be identified is probably expensive and vulnerable. Making it relatively invulnerable makes it even more expensive.
> A problem not made any worse by driverless vehicles
Yes, precisely. They never were a deterrent and "just call 911" was/is not going to stop robbers. The deterrent is sitting in the cab.
> "Crime is all about risk management. Sloppy purchases could leave a trail, yes, but video evidence can also leave a trail. One trail is easier to cover than the other: lots of people buy paint and guns, not many rob trucks."
The sort of MO that I am talking about is "Always shoots out the cameras after stopping the truck, using a rifle" (or whatever the specific details happen to be).
The reason you do not want to establish this sort of MO is that it allows the police/FBI to connect different incidents, different truck hijackings, and figure out that they were all perpetrated by the same gang of criminals.
Once they do this, they can begin analysing where and when you are likely to strike in the future. They can use information such as "This gang was out committing crime on April 15th, June 3rd and July 18th" to narrow down their list of suspects, eliminating people who might be matches for one date/location but not for the others.
> "This isn't hide-and-seek."
When you are talking about shooting out cameras, it really is. I mean, ignoring the Hollywood silliness of it, chances are you are going to be in that camera's field of view before you shoot it, and any footage the camera has already shot will not be destroyed by a bullet smashing the lens. The LIDAR is really not going to be the primary source of information about the thieves anyway; the cameras will be.
> "They never were a deterrent and "just call 911" was/is not going to stop robbers."
Calling 911 is about catching robbers, not deterring them. The sooner the police arrive on the scene, the more likely the criminals will be caught.
How fast does the average truck hijacking happen? How fast does it happen when the truck being robbed has been immobilized? Maybe you can get the trailer off and onto another truck, but that takes time and could be hindered in any number of ways. Immobilizing the truck isn't a good idea when there is a human driver who could be harmed, but with a self-driving truck there is no reason why it wouldn't be done.
Why bother? Just set every truck up with a small fleet of drones that watch from a safe distance, and fly back down to the roof of the truck to recharge when the power runs low or somesuch. Of course, the hypothetical robbers could detonate an EMP, but that in itself would be noteworthy and bring law enforcement rushing to the vicinity.
>easier just to keep a human around on more dangerous routes.
On the contrary, payouts for a driver taken hostage, injured, or killed on company business would be immense. Insurance companies would probably be quite happy about crimes taking place without humans getting hurt anymore.
"Save money on the average payout and you save money overall" is a decent argument if and only if the number of payouts remains fixed, which it most certainly won't.
Having a human present is a significant deterrent against robbing in the first place and against liberal use of firearms because if things go south with a human present then the charge is murder. A robotic truck presents no such deterrent. Lower the risk of knocking off a truck, and more trucks will get knocked off.
I'm not saying it can't be solved, I'm just saying that the solution won't be "buy insurance."
Of all the objections I've seen people raise to automated vehicles, this is one of the most ridiculous. What stops such a robbery now? Is there some spate of truck drivers running down would-be thieves that I have literally never heard about?
This was actually a plot point in a little-known B-grade sci-fi movie called "Solar Crisis". There were stretches of highway far from cities with very little traffic aside from the occasional robotic cargo truck, which were implied to have been programmed to stop if any living thing were encountered blocking the road. Gangs of robbers would steal cargo from trucks in isolated areas by tying a cow to a stake in the middle of the road, causing the robotic truck to come to a stop in front of the cow while playing a recorded voice that shouted to "clear the highway". There is a scene where some characters are riding in the back of one of these automated cargo trucks and then have to escape some of these robbers.
The implication is that if some of our roads gradually become devoid of humans, robbing automated vehicles will become a "thing". It doesn't actually sound that far-fetched, really. It would presumably be safer for the crooks than robbing a real truck today for three reasons: 1) they wouldn't have to threaten a human (thus lowering the stakes if they get caught), 2) the trucks likely would not be built to shoot back (think of the potential liability for the trucking company if it were), 3) the trucking companies would likely not put much energy into pursuing the criminals, so long as the losses were small in the grand scheme of things.
It's similar to the notion of people shooting down delivery drones and stealing the packages. People will do it, and it will likely be tolerated to some small degree. Even today we already have people stealing packages off doorsteps, although there is occasionally vigilante justice in those cases.
I imagine as soon as it encounters such a situation it'll start streaming video of what's going on over a wireless network and then someone can report this. You don't need a human to be an in-person witness to record the crime.
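The "start streaming and report" idea above can be illustrated with a toy monitor. Everything here is a hypothetical assumption for the sake of the sketch (the `TruckMonitor` name, the 120-second and 500-meter thresholds), not any real vehicle's API:

```python
# Toy sketch of hijack detection on an autonomous truck. All names and
# thresholds are illustrative assumptions, not a real vendor's interface.

from dataclasses import dataclass, field


@dataclass
class TruckMonitor:
    """Decides when an immobilized truck should start streaming evidence."""
    alerts: list = field(default_factory=list)

    def update(self, stopped_seconds: float, route_deviation_m: float) -> bool:
        # Treat a long unplanned stop or a forced detour off the approved
        # route as a possible hijacking.
        suspicious = stopped_seconds > 120 or route_deviation_m > 500
        if suspicious:
            # In a real system this would start uploading camera feeds and
            # notify the operator's call center, not just record an alert.
            self.alerts.append(("stream_video", stopped_seconds, route_deviation_m))
        return suspicious
```

The point of the sketch is only that the trigger logic is trivial; the hard parts are keeping the uplink alive when the attackers control the physical truck, and getting a human at the other end to act on the feed.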
I would bet that even autonomous trucks would have a human on board, partly to prevent exactly this scenario. They'd play the same role as night security in a warehouse. No need for them to drive; they could sleep and play video games and do whatever. The truck would alert them if anything unusual happens.
I'm sure it will happen, just like some people rob ATMs. But I don't think it will be trivial to do successfully, and the losses will almost certainly be manageable.
Not disputing that at all, but why are they still truck drivers if there are alternative jobs? And if there are no nice alternative jobs, what are they going to do instead and why aren't they doing that now?
Taxi driving is one of the jobs many immigrants end up doing in a new town. If we automate the taxi, what do we do with all the taxi drivers?
What would they do if their jobs were all eliminated overnight? Would they go get a job that will give them a better life? Probably not, because they would have done that already.
Autonomous vehicles will not show up and take over overnight. My prediction is close to full saturation (in most areas) in ~50 years. Improvements to the technology, the infrastructure, and the regulatory hurdles will all be rolled out incrementally.
Care to expand on this? If you mean to imply that this means automating trucking would be a bad thing, I disagree.
According to mainstream economics, jobs are not an externality (although they are during a recession). The total amount of money to go around after an efficiency-increasing change will be greater. With taxation, this means that in theory everyone can be made better off. In practice, the truckers will be worse off, but you could at least arrange taxes so that the percentage of people earning less than X decreased, for all X.
Then the only argument left is that the change is somehow unfair to truckers. But that viewpoint unfairly privileges the status quo.
One thing that is true is that the adjustment period will be hard.
And the adjustment period can be longer than a human lifetime … so those people are just screwed.
That’s no way to go about this. Obviously, not increasing efficiency would be bone-headed, but it is centrally important to create a socially acceptable transition for those (and any people) who are affected by this. That is currently not happening and hasn’t been happening in the past.
The luddites were right. The changes industrialization brought really did suck for them and for them it really didn’t get any better ever, until they died. From that point of view smashing the machines is entirely rational, even if those same machines created tremendous, unimaginable wealth. Obviously I’m not advocating smashing machines here, but, you know, we do have to find workable solutions for the luddites.
But we don't owe it to make sure that no one is ever worse off. There are people who never had jobs in the first place. If a trucker loses his job because of technological change, why is the trucker now entitled to more than the person who never had a job?
My own guiding principle here is that everyone deserves a life in dignity, unemployed or employed. You are of course completely correct that we also suck at ensuring that for the unemployed.
I mean, I’m more than happy to consider solutions that tackle the problem of unemployment as a whole, not just the small part we just considered. A general solution would definitely be even better, and thinking about this problem (e.g. basic income proposals) often does consider unemployment as a whole.
Existing welfare addresses the problems of unemployment. Not as well as welfare in other English speaking countries, but it still ameliorates the problem.
My position is that even with the current welfare system, technologies that increase productivity, but make some people unemployed, are still good from a utilitarian perspective. The trickle down effect (including through the existing welfare system), which benefits the entire poor population, makes up for the negative effect on the newly unemployed.
EDIT: as always, I have no problem with downvotes for my other opinions, but it's really sad to see downvotes for economic orthodoxy. If anyone reading this downvoted me, please educate yourself on mainstream economic thinking. You don't want to be remembered as an anti-vaccination advocate.
That is a big understatement. In the current world, you lose your job and you're on your own, save some meager, temporary unemployment. Society does not value the jobless and nor are we very good at enabling lost "worker threads" to retool and take on another, more valuable task. We generally treat those people as a lazy burden. Automation is a potentially unprecedented source of efficiency in many areas, but who will profit from these efficiencies? There will be economic impact from thousands of people losing their jobs, as well.
Does automated factory equipment with a human overseeing its operation and correcting error cases sound like real progress over what came before? If so, why doesn't it sound like progress when we have an automated truck with a human overseeing its operation and correcting error cases?
Though I agree this is progress, I don't think your analogy is correct here. The progress you're referring to allows one person to oversee machines that do several people's work. 1 person = 10 jobs.
With the trucks, you have the same person overseeing his own job, except that it'll be done safer. 1 person = 1 job (safer).
Your analogy would make more sense if the trucks were remotely supervised and 1 person could oversee several trucks via remote connection.
In time, all of that will happen. Right now there is too much (warranted) skepticism for society to let a driverless truck run around alone. In a few years, after it's proven that the number of incidents is much lower than with human drivers, nobody will care.
I think it's progress in the same way aircraft autopilot is; it can make operation safer for everyone and protect against some types of operator error such as falling asleep and drifting into oncoming traffic.
There's more to progress than getting rid of jobs. I'll happily pay the same price for human truck drivers/supervisors at the cash register as we do now, or a bit more, if the benefit is lower accident rates and more efficient fuel consumption. I won't save anything directly, but it will result in a safer and cleaner place to live.