Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9GHz, which was intended for dedicated short range communications (DSRC) from cars, is instead going to be released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But surprisingly, it might be something good for car communications too. The people in the ITS community certainly don't think so. They're shocked, and see this as a massive setback. They've invested huge amounts of effort, and entire careers, in the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here's why it might be the best thing to ever happen to V2V.

The innovation in mobile devices and wireless protocols of the last decade or two is a shining example for the rest of the technology world. Compare today's mobile handsets with those of 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren't any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn't planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.

Car vendors love to sell you fancy electronics for your central column. They can get thousands of dollars for the packages -- packages that often don't do as much as a $300 phone and become obsolete quickly. But customers have had enough, and are now forcing the vendors to give up on owning the online experience in the car and cede it to the phone. They're even getting ready to cede their "telematics" (things like OnStar) to customer phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?


The importance of serial media vs. sampled and Google Reader

The blogging world was stunned by the recent announcement from Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won't comment too much on their reasoning, though I will note that it's possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google's statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.

Those platforms are certainly effective. I have noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common for more people to comment on the social network than comment here on the blog. It's easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where readers arrive from all origins.

However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:

  • Is it browsed, serial or to be sampled?
  • Is there a core concept of new messages vs. already-read messages?
  • If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
  • Is it designed to make it easy to write and post or easy to read and consume?

Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including Plato, BBSs, Compuserve and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are "read" and "unread." A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they came, and when you are done with them, they move out of your view.

E-mail is largely used to read messages one at a time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read -- almost never all of them -- and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will only read those of particular interest, though this depends on the volume.
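To make that bulk "catch up" idea concrete, here is a tiny sketch (my own toy model, not the code of any actual newsreader) of how a serial reader can track read state per thread and mark everything read at once:

```python
# Toy model of USENET-style "catch up": show threads with unread messages,
# let the user read a few, then mark everything -- read or not -- as read.
from dataclasses import dataclass, field

@dataclass
class Thread:
    subject: str
    message_ids: list   # all messages in the thread, in arrival order

@dataclass
class ReaderState:
    read_ids: set = field(default_factory=set)

    def unread_threads(self, threads):
        return [t for t in threads
                if any(m not in self.read_ids for m in t.message_ids)]

    def catch_up(self, threads):
        # The key idea: after the session, even unopened messages are
        # marked read and will not be shown again.
        for t in threads:
            self.read_ids.update(t.message_ids)

threads = [Thread("V2V mandate", ["m1", "m2"]), Thread("RSS readers", ["m3"])]
state = ReaderState()
print([t.subject for t in state.unread_threads(threads)])  # both threads shown
state.catch_up(threads)
print([t.subject for t in state.unread_threads(threads)])  # [] -- all caught up
```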

Echoes of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar, but being less frequent, more stories would actually be read.

USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.

In fact, the 1990s saw a switch for most people to browsed media. Most web message boards were quite poor and slow to use; many did not even do the most fundamental thing of remembering what you had read and offering a "what's new for me?" view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and now is shutting down.) Online feed readers allow you to roam from device to device and read your feeds, and people like that.
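The polling that made RSS a kludge is easy to picture in code. Here is a minimal sketch of a feed poller (using the third-party feedparser library; the feed URL and polling interval are placeholders of my choosing):

```python
import time
import feedparser  # third-party library: pip install feedparser

FEEDS = ["https://example.com/feed.rss"]  # placeholder feed URL
seen = set()                              # entry IDs already shown to the reader

def poll_once():
    for url in FEEDS:
        feed = feedparser.parse(url)      # re-fetch the whole feed every time
        for entry in feed.entries:
            uid = entry.get("id") or entry.get("link")
            if uid and uid not in seen:
                seen.add(uid)
                print("NEW:", entry.get("title", "(untitled)"))

while True:
    poll_once()           # every reader polls every site -- the "kludge"
    time.sleep(15 * 60)   # wait politely between polls
```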

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths, I wrote about why the development of "vehicle to vehicle" (V2V) communications was mostly orthogonal to that of robocars. That's very far from the view of many authors, and most of those in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is some actual value in V2V, and we would like to see that value realized in the future, I am afraid the current strategy will not work out and will misdirect a lot of resources along the way.

This is particularly apropos because recently, the FCC issued an NPRM saying it wants to open up the DSRC band at 5.9GHz that was meant for V2V for unlicensed WiFi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicensed spectrum is quite appropriately very large -- the opening up of 2.4GHz decades ago generated the greatest period of innovation in the history of radio -- and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or for that matter, the telephone itself.) For a lot of us, we did not get a fax machine until it was clear that lots of people had fax machines, and we were routinely being asked to send or receive faxes. But somebody had to buy the first fax machine; in fact, people had to buy the first million fax machines before this could start happening.

This was not a problem because while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines within organizations, and one day two companies noticed they both had fax and started communicating inter-company instead of just intra-company.

So we see rule one: The technology has to have strong value to the first purchasers. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. The value can be high-cost, high-value "early adopter" value, but it must be real.

This was true for fax, e-mail, phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all, systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail came on top of the phone and later the internet. Skype came on top of the internet and PCs. The underlying system made it possible for two people to adopt a technology that was useful to just those two, and the two people could be anywhere. Any two offices could get a fax or an e-mail system and communicate; only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did it, they got the full value they were paying for. They didn't pay for phone wires in the hope that others would some day also pay for wires and they could talk to them -- they found enough value calling the people already on that network.

Social networks are also interesting. There is a strong critical mass factor there. But a social network is useful to a small group of friends who join it. It is not necessary that other people's social groups join, not at first. And social networks have the advantage of viral spreading -- the existing infrastructure of e-mail allows one person to invite all their friends to join in.

Enter Car V2V

Car V2V doesn't satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.


Perils of the long range electric car

You've probably seen the battle going on between Elon Musk of Tesla and the New York Times over the strongly negative review the NYT made of a long road trip in a Model S. The reviewer ran out of charge and had a very rough trip with lots of range anxiety. The data logs published by Tesla show he made a number of mistakes, didn't follow some instructions on speed and heat and could have pulled off the road trip if he had done it right.

Both sides are right, though. Tesla has made it possible to do the road trip in the Model S, but they haven't made it easy. It's possible to screw it up, and instructions to go slow and keep the heater low are not ones people want to follow. 40 minute supercharges are still pretty long; they are not good for the battery, and it's hard to believe they scale well since they take so long. While Better Place's battery swap provides a tolerable 5 minute swap, it also presents scaling issues -- you don't want to show up at a station that does 5 minute swaps and be 6th in line.

The Tesla Model S is an amazing car, hugely fun to drive and zippy, cool on the inside and high tech. Driving around a large metro area can be done without range anxiety, which is great. I would love to have one -- I just love $85K more. But a long road trip, particularly on a cold day? There are better choices. (And in the Robocar world when you can get cars delivered, you will get the right car for your trip delivered.)

Electric cars have a number of worthwhile advantages, and as battery technologies improve they will come into their own. But let's consider the economics of a long range electric. The Tesla Model S comes in 3 levels, and there is a $20,000 difference between the 40kwh 160 mile version and the 85kwh 300 mile version. It's a $35K difference if you want the performance package.

The unspoken secret of electric cars is that while you can get the electricity for the Model S for just 3 cents/mile at national grid average prices (compared to 12 cents/mile for gasoline in a 30mpg car and 7 cents/mile in a 50mpg hybrid), this is not the full story. You also pay, as you can see, a lot for the battery. There are conflicting reports on how long a battery pack will last you (and that in turn varies with how you use and abuse it.) If we take the battery lifetime at 150,000 miles -- which is more than most give it -- you can see that the extra 45kwh add-on in the Tesla for $20K is costing about 13 cents/mile. The whole battery pack in the 85kwh Tesla, at $42K estimated, is costing a whopping 28 cents/mile in depreciation.

Here's a yikes. At a 5% interest rate, you're paying $2,100 a year in interest on the $42,000 Tesla S 85kwh battery pack. If you drive the national average 12,000 miles/year, that's 17.5 cents/mile just for interest on the battery, not counting depreciation of the vehicle or the battery. Add interest, depreciation and electricity and it's just under 40 cents/mile -- similar to a 10mpg Hummer H2. (I bet most Tesla Model S owners do more than that average 12K miles/year, which improves these numbers.)
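To make the arithmetic explicit, here is a small Python sketch using the figures assumed above ($42K pack, 150,000 mile life, 12,000 miles/year, 5% interest, 3 cents/mile electricity). The "just under 40 cents/mile" total works out if the interest is averaged over the pack's life as the financed balance depreciates; the first year alone is the 17.5 cents quoted above.

```python
def battery_cost_per_mile(pack_price, pack_life_miles, annual_miles,
                          interest_rate, electricity_per_mile):
    """Rough per-mile cost of an EV battery pack (a sketch, not a precise model)."""
    depreciation = pack_price / pack_life_miles            # straight-line over pack life
    first_year_interest = pack_price * interest_rate / annual_miles
    # Interest averaged over the pack's life, assuming the financed balance
    # declines linearly to zero as the pack wears out.
    years = pack_life_miles / annual_miles
    avg_interest = (pack_price * interest_rate * years / 2) / pack_life_miles
    return depreciation, first_year_interest, avg_interest, electricity_per_mile

# Assumed Tesla Model S 85kwh figures from the text
dep, int_y1, int_avg, elec = battery_cost_per_mile(
    pack_price=42_000, pack_life_miles=150_000,
    annual_miles=12_000, interest_rate=0.05, electricity_per_mile=0.03)

print(f"depreciation {dep*100:.0f}c, first-year interest {int_y1*100:.1f}c, "
      f"life-averaged interest {int_avg*100:.2f}c, electricity {elec*100:.0f}c")
print(f"total with averaged interest: {(dep + int_avg + elec)*100:.1f} cents/mile")
# -> depreciation 28c, first-year interest 17.5c, life-averaged interest 8.75c, electricity 3c
# -> total with averaged interest: 39.8 cents/mile
```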

In other words, the cost of the battery dwarfs the cost of the electricity, and sadly it also dwarfs the cost of gasoline in most cars. With an electric car, you are effectively paying most of your fuel costs up front. You may also be adding home charging station costs. This tells us how much cheaper the battery must get.

It's a bit easier in the Nissan LEAF, whose 24kwh battery pack is estimated to cost about $15,000. Here if it lasts 150K miles we have 10 cents/mile plus the electricity, for a total cost of 13 cents/mile which competes with gasoline cars, though adding interest it's 19 cents/mile -- which does not compete. As a plus, the electric car is simpler and should need less maintenance. (Of course with as much as $10,000 in tax credits, that battery pack can be a reasonable purchase, at taxpayer expense.) A typical gasoline car spends about 5 cents/mile on non-tire maintenance.
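Plugging the LEAF figures from this paragraph into the same battery_cost_per_mile sketch above (assumed: $15K pack, 150K mile life, 12K miles/year, 3 cents/mile electricity) reproduces the numbers:

```python
dep, int_y1, _, elec = battery_cost_per_mile(
    pack_price=15_000, pack_life_miles=150_000,
    annual_miles=12_000, interest_rate=0.05, electricity_per_mile=0.03)
print(f"{(dep + elec)*100:.0f} cents/mile without interest, "
      f"{(dep + int_y1 + elec)*100:.2f} cents/mile with first-year interest")
# -> 13 cents/mile without interest, 19.25 cents/mile with first-year interest
```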

This math changes a lot with the actual battery life; many people estimate that battery lives will be shorter than 150K miles, while others estimate longer. The larger your battery pack and the less often you fully use it, the longer it lasts. The average car doesn't last a lot more than 150K miles, at least outside of California.

The problem with range anxiety becomes clearer. The 85kwh Tesla lets you do your daily driving around your city with no range anxiety. That's great. But to get that you buy a huge battery pack, and you use the extra range only rarely. Most trips can actually be handled by the 70 mile range Leaf, though with some anxiety. You only need all that extra battery for the occasional longer trip, yet you spend a lot of extra money to have it.

Your session has expired. Forgot your password? Click Here!

We see it all the time. We log in to a web site but after not doing anything on the site for a while -- sometimes as little as 10 minutes -- the site reports "your session has timed out, please log in again."

And you get the login screen, which offers, along with the ability to log in, a link marked "Forgot your password?" that will reset (OK) or recover (very bad) your password via your E-mail account.

The same E-mail account you are almost surely logged into in another tab or another window on your desktop. The same e-mail account that lets you go a very long time idle before needing authentication again -- perhaps even forever.

So if you've left your desktop and some villain has come to your computer and wants to get into that site that oh-so-wisely logged you out, all they need to do is click to recover the password, go into the E-mail to learn it, delete that E-mail and log in again.

Well, that's assuming you don't, as many people do, have your browser remember passwords, in which case they can log in again without any trouble at all.

It's a little better if the site does only password reset rather than password recovery. In that case, they have to change your password, and you will at least detect they did that, because you can't log in any more and have to do a password reset. That is if you don't just think, "Damn, I must have forgotten that password. Oh well, I will reset it now."

In other words, a lot of user inconvenience for no security, except among the most paranoid who also have their E-mail auth time out just as quickly, which is nobody. Those who have their whole computer lock with the screen saver are a bit better off, as everything is locked out, as long as they also use whole disk encryption to stop an attacker from reading stuff off the disk.


Top Myths of Robocars (and why V2V is not the answer)

There's been a lot of press on robocars in the last few months, and a lot of new writers expressing views. Reading these, I have encountered a recurring set of issues and concerns, so I've prepared an article outlining the top myths and explaining why they are not true.


CES Report, Road tolling and more

I'm back from CES, and there was certainly a lot of press over two pre-robocar announcements there:

Toyota

The first was the Toyota/Lexus booth, which was dominated by a research car reminiscent of the sensor-stacked vehicles of the DARPA grand challenges. It featured a Velodyne on top (like almost all the high capability vehicles today) and a very large array of radars, including six looking to the sides. Toyota was quite understated about the vehicle, saying they had low interest in full self-driving, but were doing this in order to research better driver assist and safety systems.

The Lexus booth also featured a car that used ultrasonic sensors to help you when backing out of a blind parking space. These sensors let you know if there is somebody coming down the lane of the parking lot.

Audi

Audi did two demos for the press, which I went to see. Audi also emphasized that this is long-term concept stuff, meant as research work to enhance their "driver in the loop" systems. They are branding these projects "Piloted Parking" and "Piloted Driving" to suggest the idea of an autopilot with a human overseer. However, the parking system is unmanned, and was demonstrated in the lot of the Mandarin Oriental, though the demo area was closed off to pedestrians.

The parking demo was quite similar to the Junior 3 demo I saw 3 years ago, and no surprise, because Junior 3 was built at the lab which is a collaboration between Stanford and VW/Audi. Junior 3 had a small laser sensor built into it. The Piloted Parking car, by contrast, had only ultrasonic sensors and cameras, and relied on a laser mounted in the parking lot. In this approach, the car has a wifi link which it uses to download a parking lot map, as well as commands from its owner, and it also gets data from the laser. Audi produced a mobile app which could command the car to move, on its own, into the lot to find a space, and then back to pick up the owner. The car also had a slick internal display with a pop-up screen.

The question of where to put the laser is an interesting one. In this approach, you only park in lots that are pre-approved and prepared for self-parking. Scanning lasers are currently expensive, and if parking is your only application, then since there are a lot more cars than there are parking lots, it might make sense to put the expensive sensor in the lots. However, if the cars want to have the laser anyway for driving, it's better to have the sensor in the car. In addition, it's more likely that car buyers will be early adopters than parking lot owners.

In the photo you see the Audi highway demo car sporting the Nevada Autonomous Vehicle testing licence #007. Audi announced they just got this licence, the first car maker to do so. This car offers "Piloted Driving" -- the driver must stay alert, while a lane-keeping system steers the car between the lane markers and an automatic cruise control maintains distance from other cars. This is similar to systems announced by Mercedes, Cadillac, VW, Volvo and others. Audi already has announced such a system for traffic jams -- the demo car also handled faster traffic.

Audi also announced their use of a new smaller LIDAR sensor. The Velodyne found on the Toyota car and Google cars is a large, roof-mounted device. However, they did not show a car using this sensor.

Audi also had a simulator in their booth showing a future car that can drive in traffic jams, and lets you take a video phone call while it is driving. If you take control of the car, it cuts off the video, but keeps the audio.

Robocars and road charging


The future of the city and Robocar Oriented Development

It's been a while since I've done a major new article on long-term consequences of Robocars. For some time I've been puzzling over just how our urban spaces will change because of robocars. There are a lot of unanswered questions, and many things could go both ways. I have been calling for urban planners to start researching the consequences of robocars and modifying their own plans based on this.

Mercedes cruising S-Class, NHTSA and Google

While there had been many rumours that Mercedes would introduce limited self-driving in the 2013 S-class, that was not to be. However, it seems plans for the 2014 S-class are much more firm. This car will feature "steering assist" which uses stereo cameras and radar to follow lanes and follow cars, along with standard ACC functions. Reportedly it will operate at very high speeds.


Foresight Institute technical conference is Jan 11 in Palo Alto

I'm on the board of the Foresight Institute, which at over 25 years old has been promoting nanotech since long before people knew the word. This January, we will be holding our technical conference on nanotechnology and related fields. Foresight's focus is on the potential for molecular manufacturing -- doing things at the atomic level -- and not simply on fine structure materials.

Nate Silver is Not God and other political musings

In the wake of the election, the big nerd story is the perfect stats-based prediction that Nate Silver of the 538 blog made on the results in every single state. I was following the blog and, like everyone, am impressed with his work. The perfection gives the wrong impression, however. Silver would be the first to point out that he predicted Florida as very close with a slight lean for Obama, and while that is what happened, that's really just luck. His actual prediction was that it was too close to call. But people won't see that; they see the perfection. I hope he realizes he should try to downplay this. For his own sake, if he doesn't, he has nowhere to go but down in 2014 and 2016.

But the second reason to downplay it is stronger. People will put even more faith in polls. Perhaps not even faith, but reasoned belief, because polls are indeed getting more accurate. Good polls taken far in advance are probably accurate about what the electorate thinks at that moment, but the electorate itself is not that settled far in advance. So the public and politicians should always be wary about what the polls say before the election.

Silver's triumph means they may not be. And as the metaphorical Heisenberg predicts, the observations will change the results of the election.

There are a few ways this can happen. First, people change their votes based on polls. They are less likely to vote if they think the election is decided, or they sometimes cast protest votes when they feel their vote won't change things. Conversely, a close poll is one way to increase turnout, and both sides push their voters to make the difference. People are going to think the election is settled because 538 has said what people are feeling.

The second big change has already been happening. Politicians change their platforms due to the polls. Danny Hillis observed some years ago that the popular vote is almost always a near tie for a reason. In a two party system, each side regularly runs polls. If the polls show them losing, they move their position in order to get to 51%. They don't want to move to 52% as that's more change than they really want, but they don't want to move to less than 50% or they lose the whole game. Both sides do this, and to some extent the one with better polling and strategy wins the election. We get two candidates, each with a carefully chosen position designed to (according to their own team) just beat the opposition, and the actual result is closer to a random draw driven by chaotic factors.
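Danny's argument is easy to simulate. Here is a deliberately crude toy model (entirely my own sketch, with made-up parameters): whichever side's poll shows it losing shifts its platform just enough to reach about 51%, and election day adds a small chaotic shock. The result is almost always a near tie, with the winner close to a coin flip:

```python
import random

def run_campaign(weeks=20, poll_error=0.02, shock=0.01):
    """Toy version of the Hillis near-tie argument (made-up parameters)."""
    support = random.uniform(0.40, 0.60)   # candidate A's true two-party share
    for _ in range(weeks):
        poll_a = support + random.gauss(0, poll_error)        # A's noisy poll
        poll_b = (1 - support) + random.gauss(0, poll_error)  # B's noisy poll
        if poll_a < 0.50:
            support += 0.51 - poll_a       # A moves just enough to reach ~51%
        elif poll_b < 0.50:
            support -= 0.51 - poll_b       # B does the same
    return support + random.gauss(0, shock)  # chaotic election-day factors

results = [run_campaign() for _ in range(10_000)]
margins = sorted(abs(r - 0.5) for r in results)
print(f"median margin: {margins[len(margins) // 2] * 100:.1f} points")
print(f"candidate A wins {sum(r > 0.5 for r in results) / len(results):.0%} of runs")
```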

Well, not quite. As Silver shows, the electoral college stops that from happening. The electoral college means different voters have different value to the candidates, and it makes the system pretty complex. Instead of aiming for a total of voters, you have to worry that position A might help you in Ohio but hurt you in Florida, and the electoral votes come in big chunks, which makes the effect of swing states more chaotic. Thus poll analysis can tell you who will win, but not so readily how to tweak things to make the winner be you. The college makes small differences in overall support lead to huge differences in electoral votes.

In Danny's theory, the two candidates do not have to be the same, they just have to be the same distance from a hypothetical center. (Of course to 3rd parties the two candidates do tend to look nearly identical but to the members of the two main parties they look very different.)

Show me the money?

Many have noted that this election may have cost $6B but produced a very status quo result. Huge money was spent, but opposed forces also spent their money, and the arms race just led to a similar balance of power. Except a lot of rich donors spent a lot of their money and got valuable access to politicians for it, and some TV stations in Ohio and a few other states made a killing. The fear that corporate money would massively swing the process does not appear to be supported by much evidence, but it's clear that influence was bought.

I'm working on a solution to this, however. More to come later on that.

Ballot Propositions

While there have been some fairly good ballot propositions (such as last night's wins for marijuana and marriage equality), I am starting to doubt the value of the system itself. As much as you might like the propositions you favour, if half of the propositions are negative in value, the system should be scrapped, assuming the average bad one does about as much harm as the average good one does good. Indeed, if only about 40% are negative, it should still be scrapped, because the modest expected benefit is outweighed by the huge cost of the system itself.

Larry Niven and Greg Benford on "Bowl of Heaven" and Big, Dumb Objects

Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book "Bowl of Heaven." Here's a Youtube video of my session. They did a review of the history of SF about "big dumb objects" -- stories like Niven's Ringworld, where a huge construct is a central part of the story.


Nissan's Self-Parking Leaf

Nissan is showing a modified Leaf able to do "valet" parking in a controlled parking lot. The Leaf downloads a map of the lot, and then, according to Nissan engineers, is able to determine its position in the lot with 4 cameras, then hunt for a spot and pull into it. We've seen valet parking demonstrations before, but calculating position entirely with cameras is somewhat new, mainly because of the issues with how lighting conditions vary. In an indoor parking garage it's a different story, and camera-based localization under constant lighting should be quite doable.


Science Fiction movies at Palo Alto Film Festival, and Robocars legal in California

I haven't bothered to quickly report on the robocar story every other media outlet covered, the signing by Jerry Brown of California's law to enable robocars. For those with the keenest interest, the video of the signing ceremony includes a short talk by Sergey Brin on some of his visions for the car, in which he declares that the tech will be available for ordinary people within 5 years.
