brad's blog

New surveys with growing acceptance levels

Some interesting robocar surveys are out.

Today, a survey conducted by Cisco showed very high numbers of people saying yes, they would ride in a robocar: 57% globally, with 60% in the USA and an incredible 95% in Brazil. (Perhaps it is the truly horrible traffic in the big cities of Brazil which drives this number.) A bit more surprising was the 28% figure for Japan.


Radio show on Robocars, Monday the 13th at 7pm PDT

I will be a guest on Monday the 13th (correction -- I originally said the 14th) on the "City Visions" program, produced by one of San Francisco's NPR affiliates, KALW. The show runs at 7pm, and you can listen live and phone in (415-841-4134), or listen to the podcast later. Details are on the page about the show.

Other guests include Bryant Walker Smith of Stanford, Martin Sierhuis of the Nissan robocar lab and Bernard Soriano from the California DMV. Should be a good panel.

Moonshots, laws, Tesla and other recent robocar news

Here's a roundup of various recent news items on robocars. There are now a few locations, such as DriverlessCarHQ and the LinkedIn self-driving car group, which feature very extensive listings of news items related to robocars. Robocars are now getting popular enough that there are articles every day, but only a few of them contain actual real news for readers of this site or others up on the technology.


ESticks -- a standardized quick-swap battery proposal

You've probably noticed that with many of our portable devices, especially phones and tablets, a large fraction of the size and weight are the battery. Battery technology keeps improving, and costs go down, and there are dreams of fancy new chemistries and even ultracapacitors, but this has become a dominant issue.

Every device seems to have a different battery. Industrial designers work very hard on the design of their devices, and they don't want to be constrained by having to standardize the battery space. In many devices, they are even giving up the replaceable battery in the interests of good design. The existing standard battery sizes, such as the AA, AAA and even the AAAA and other less common sizes are just not suitable for a lot of our devices, and while cylindrical form factors make the most sense for many cell designs they don't fit well in the design of small devices.

So what's holding back a new generation of standardization in batteries? Is it the factors named above, the fact that tech is changing rapidly, or something else?

I would propose a small, thin modular battery that I would call the EStick, for energy stick. The smaller EStick sizes would be thin enough for cell phones. The goal would be to have more than one EStick, or at least more than one battery, in a typical device. Because of the packaging and connections, that would mean a modest reduction in battery capacity -- normally a horrible idea -- but some of the advantages might make it worth it.

Quick swap

There are several reasons to have multiple sticks or batteries in a device. In particular, you want the ability to quickly and easily swap at least one stick while the device is still operating, though it might switch to a lower power mode during the swap. The stick slot would have a spring loaded snap, as is common in many devices like cameras, though there may be desire for a door in addition.

Swapping presents the issue that not all the cells are at the same charge level and voltage. This is generally a bad thing, but modern voltage-control electronics have reached the point where handling mismatched cells should be possible in smaller and smaller packages. It is possible with some devices to simply use one stick at a time, as long as that provides enough current. This uses up the battery lifetime faster, and means less capacity, but is simpler.

The quick hot swap offers the potential for indefinite battery life. In particular, it means that very small devices, such as wearable computers (watches, glasses and the like) could run a long time. They might run only 3-4 hours on a single stick, but a user could keep a supply of sticks in a pocket or bag to get arbitrary lifetime. Tiny devices that nobody would ever use because "that would only last 2 hours" could become practical.

While 2 or more sticks would be best for swap, a single stick and an internal battery or capacitor, combined with a sleep mode that can survive for 20-30 seconds without a battery could be OK.
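To make the swap logic concrete, here is a rough Python sketch of the single-stick-plus-capacitor case. It is only an illustration of the idea above, not real firmware, and the 25-second hold-up figure is simply an assumption within the 20-30 second window mentioned.

    # Conceptual sketch (not firmware): when the stick is pulled, drop to a
    # low-power sleep that a small internal capacitor can carry for ~25 s.
    CAPACITOR_BUDGET_S = 25   # assumed hold-up time while no stick is present

    def handle_stick_removed(seconds_until_new_stick: float) -> str:
        enter_sleep_mode()                # display off, radios off, CPU throttled
        if seconds_until_new_stick <= CAPACITOR_BUDGET_S:
            resume_normal_mode()          # a charged stick was snapped in, in time
            return "swapped"
        shut_down_cleanly()               # capacitor exhausted before the swap finished
        return "powered off"

    def enter_sleep_mode():    print("sleep: suspending to RAM")
    def resume_normal_mode():  print("resume: new EStick detected")
    def shut_down_cleanly():   print("shutdown: saving state before power loss")

    # Example: handle_stick_removed(12) -> "swapped"
    #          handle_stick_removed(40) -> "powered off"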

Anatomy of the first robocar accidents

I have prepared a large new Robocar article. This one covers just what will happen when the first robocars are involved in accidents out on public streets, possibly injuring people. While everybody is working to minimize this, perfection is neither possible nor the right goal, so it will eventually happen. In public discussion of robocars and in press articles, people are always very curious about accidents, liability, insurance and the extent to which these issues are blockers on the technology.

So please consider:


Oh Hugo Awards, where have you gone?

I follow the Hugo awards closely, and 20 years ago published the 1993 Hugo and Nebula Anthology which was probably the largest anthology of currently released fiction ever published at the time.

The Hugo awards are voted by around 1,000 fans who attend the World SF Convention, so they have their biases, but over time almost all the greats have been recognized. In addition, until the year 2000, in the best novel Hugo, considered the most important, the winner was always science fiction, not fantasy, even though both and more were eligible. That shifted, and from 2001 to 2012, there have been 6 Fantasy winners, one Alternate History, and 5+1 SF. (2010 featured a tie between bad-science SF in The Windup Girl and genre-bending political science fiction in The City & The City.)

That's not the only change to concern me. A few times my own pick for the best has not even been nominated. While that obviously shows a shift between my taste and the rest of the fans, I think I can point to reasons why it's not just that.

The 2013 nominees I find not particularly inspiring. And to me, that's not a good sign. I believe that the Hugo award winning novel should say to history, "This is an example of the best that our era could produce." If it's not such an example, I think "No Award" should win. (No Award is a candidate on each ballot, but it never comes remotely close to winning, and hasn't ever for novels. In the 70s, it deservedly won a few times for movies. SF movies in the mid and early 70s were largely dreck.)

What is great SF? I've written on it before, but here's an improvement of my definition. Great SF should change how you see the future/science/technology. Indeed, perhaps all great literature should change how you view the thing that is the subject matter of the literature, be it love, suffering, politics or anything else. That's one reason why I have the preference for SF over Fantasy in this award. Fantasy has a much harder time attaining that goal.

I should note that I consider these books below as worth reading. My criticism is around whether they meet the standard for greatness that a Hugo candidate should have.

2312 by Kim Stanley Robinson

This is the best of the bunch, and it does an interesting exploration into the relationship of human and AI, and as in all of Stan's fiction, the environment. His rolling city on Mercury is a wonder. The setup is great but the pace is as glacial as the slowly rolling city and the result is good, but not at the level of greatness I require here.


Oliver Kuttner on Very-Light-Car

Last year, I met Oliver Kuttner, who led the team that won the Progressive X-Prize for building the most efficient and practical car achieving over 100mpg. Oliver's Edison2 team won with the VLC (Very Light Car) and surprised everybody by doing it with a liquid fuel engine. There was a huge expectation that an electric car would win the prize, and in fact the rules had been laid out to almost assure it, granting electric cars an advantage over gasoline that I thought was not appropriate.

A Bitcoin Analogy

Bitcoin is having its first "15 minutes" with the recent bubble and crash, but Bitcoin is pretty hard to understand, so I've produced this analogy to give people a deeper understanding of what's going on.

It begins with a group of folks who take a different view on several attributes of conventional "fiat" money. It's not backed by any physical commodity, just faith in the government and central bank which issues it. In fact, it's really backed by the fact that other people believe it's valuable, and you can trade reliably with them using it. You can't go to the US treasury with your dollars and get very much directly, though you must pay your US tax bill with them. If a "fiat" currency faces trouble, you are depending on the strength of the backing government to do "stuff" to prevent that collapse. Central banks in turn get a lot of control over the currency, and in particular they can print more of it any time they think the market will stomach such printing -- and sometimes even when it can't -- and they can regulate commerce and invade privacy on large transactions. Their ability to set interest rates and print more money is both a bug (that has sometimes caused horrible inflation) and a feature, as that inflation can be brought under control and deflation can be prevented.

The creators of Bitcoin wanted to build a system without many of these flaws of fiat money, without central control, without anybody who could control the currency or print it as they wish. They wanted an anonymous, privacy protecting currency. In addition, they knew an open digital currency would be very efficient, with transactions costing effectively nothing -- which is a pretty big deal when you see Visa and Mastercard able to sustain taking 2% of transactions, and banks taking a smaller but still real cut.

With those goals in mind, they considered the fact that even the fiat currencies largely have value because everybody agrees they have value, and the value of the government backing is at the very least, debatable. They suggested that one might make a currency whose only value came from that group consensus and its useful technical features. That's still a very debatable topic, but for now there are enough people willing to support it that the experiment is underway. Most are aware there is considerable risk.

Update: I've grown less fond of this analogy and am working up a superior one, closer to the reality but still easy to understand.

Wordcoin

Bitcoins -- the digital money that has value only because enough people agree it does -- are themselves just very large special numbers. To explain this I am going to lay out an imperfect analogy using words and describe "wordcoin" as it might exist in the pre-computer era. The goal is to help the less technical understand some of the mechanisms of a digital crypto-based currency, and thus be better able to join the debate about them.
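For those who want to peek past the analogy at the real mechanism, here is a minimal Python sketch of the "very large special numbers" idea: a proof-of-work search for a nonce whose hash falls below a target. It is deliberately simplified -- real Bitcoin hashes a structured block header with double SHA-256, not a text string -- so treat it purely as an illustration.

    import hashlib

    def mine(previous_hash: str, payload: str, difficulty_bits: int = 20):
        """Find a nonce so that SHA-256(previous_hash + payload + nonce)
        falls below a target, i.e. starts with difficulty_bits zero bits.
        20 bits keeps the demo fast; real difficulty is vastly higher."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{previous_hash}{payload}{nonce}".encode()).hexdigest()
            if int(digest, 16) < target:
                return nonce, digest      # the "special number" and its proof
            nonce += 1

    # Example: nonce, digest = mine("0000abc...", "Alice pays Bob 1 coin")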


We Robot Robot Law Conference and Robot Block Party

It's National Robotics Week, and various events are going on -- probably some in your area.

Today and tomorrow I am at the We Robot conference at Stanford, where people are presenting papers puzzling over how robots and the law will interact. Not enough technology folks at this iteration of the conference -- we have a natural aversion to this sometimes -- but because we're building big moving things that could run into people, the law has to be understood.


The Personal Cloud and Data Deposit Box

Last night I gave a short talk at the 3rd "Personal Clouds" meeting in San Francisco. The term "personal clouds" is a bit vague at present, but in part it describes what I had proposed in 2008 as the "data deposit box" -- a means to achieve the various benefits of corporate-hosted cloud applications in computing space owned and controlled by the user. Other people interpret the phrase "personal clouds" to mean mechanisms for the user to host, control or monetize their own data, or to control their relationships with vendors and others who will use that data; in the simplest form, some use it to refer to personal resources hosted in the cloud, such as cloud disk drive services like Dropbox.

I continue to focus on the vision of providing the advantages of cloud applications closer to the user, bringing the code to the data (as was the case in the PC era) rather than bringing the data to the code (as is now the norm in cloud applications.)
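As a rough illustration of "bringing the code to the data", here is a small Python sketch of a hypothetical deposit-box host that loads a vendor's app module and runs it against data stored locally. All the names here (DepositBox, run_app, photo_organizer.py) are invented for this example, and a real host would sandbox the vendor code and mediate what it may send back.

    import json
    import importlib.util

    class DepositBox:
        def __init__(self, data_path):
            self.data_path = data_path        # the user's data stays on the user's box

        def load_data(self):
            with open(self.data_path) as f:
                return json.load(f)

        def run_app(self, app_module_path):
            """Load a vendor-supplied app module and run it against local data,
            instead of shipping the data to the vendor's servers."""
            spec = importlib.util.spec_from_file_location("vendor_app", app_module_path)
            app = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(app)
            return app.run(self.load_data())  # hypothetical app interface

    # box = DepositBox("my_photos.json")
    # summary = box.run_app("photo_organizer.py")   # vendor code comes to the data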

Consider the many advantages of cloud applications for the developer:

  • You write and maintain your code on machines you build, configure and maintain.
    • That means none of the immense support headaches of trying to write software to run on multiple OSs, with many versions and thousands of variations. (Instead you do have to deal with all the browsers, but that's easier.)
    • It also means you control the uptime and speed
    • Users are never running old versions of your code and facing upgrade problems
    • You can debug, monitor, log and fix all problems with access to the real data
  • You can sell the product as a service, either getting continuing revenue or advertising revenue
  • You can remove features, shut down products
  • You can control how people use the product and even what steps they may take to modify it or add plug-ins or 3rd party mods
  • You can combine data from many users to make compelling applications, particularly in the social space
  • You can track many aspects of single and multiple user behaviour to customize services and optimize advertising, learning as you go

Some of those are disadvantages for the user of course, who has given up control. And there is one big disadvantage for the provider, namely they have to pay for all the computing resources, and that doesn't scale -- 10x users can mean paying 10x as much for computing, especially if the cloud apps run on top of a lower level cloud cluster which is sold by the minute.

But users see advantages too:


Speaking on Personal Clouds in SF, and Robocars in Phoenix

Two upcoming talks:

Tomorrow (April 4) I will give a very short talk at the meeting of the personal clouds interest group. As far as I know, I was among the first to propose the concept of the personal cloud in my essays on the Data Deposit Box back in 2007, and while my essays are not the reason for it, the idea is gaining some traction now as more and more people think about the consequences of moving everything into the corporate clouds.

The rise of the small and narrow vehicle

One of the more interesting consequences of a robotic taxi "mobility on demand" service is the ability to open up all sorts of new areas of car design. When you are just summoning a vehicle for one trip, you can be sent a vehicle that is well matched to that trip. Today we almost all drive in 5-passenger sedans or larger, whether we are alone, with a single passenger or in a group. Many always travel in an SUV or minivan on trips that have no need of that.


V2V and connected car part 3: Broadcast data

Earlier in part one I examined why it's hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.

For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.


Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9ghz which was intended for short range communications (DSRC) from cars is instead going to be released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But surprisingly it might be something good for car communications too. The people in the ITS community certainly don't think so. They're shocked, and see this as a massive setback. They've invested huge amounts of efforts and careers into the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here's why it might be the best thing to ever happen to V2V.

The innovation in mobile devices and wireless protocols of the last 1-2 decades is a shining example to all technology. Compare today's mobile handsets with 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren't any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn't planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.

Car vendors love to sell you fancy electronics for your central column. They can get thousands of dollars for the packages -- packages that often don't do as much as a $300 phone and get obsolete quickly. But customers have had enough, and are now forcing the vendors to give up on owning that online experience in the car and ceding it to the phone. They're even getting ready to cede their "telematics" (things like OnStar) to customer phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?
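To give a flavour of what a phone-based approach might involve, here is a hedged Python sketch of a simple position beacon broadcast over UDP. The message fields, rate, port and transport are purely illustrative assumptions, not any DSRC or ITS standard; a real deployment would use whatever short-range radio the handset offers and rotating pseudonyms for privacy.

    import json, socket, time

    def make_beacon(vehicle_id, lat, lon, speed_mps, heading_deg):
        # A tiny "here I am" message; field names are invented for this sketch.
        return json.dumps({
            "id": vehicle_id,          # would be a rotating pseudonym in practice
            "t": time.time(),
            "lat": lat, "lon": lon,
            "v": speed_mps, "hdg": heading_deg,
        }).encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    # Broadcast on the local segment; a phone would use Wi-Fi Direct, BLE or
    # whatever succeeds DSRC, and would repeat this a few times per second.
    sock.sendto(make_beacon("demo-123", 37.4, -122.1, 13.4, 90.0),
                ("255.255.255.255", 37020))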


The importance of serial media vs. sampled and Google Reader

The blogging world was stunned by the recent announcement by Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won't comment too much on their reasoning, though I will note that I believe it's possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google's statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.

The effectiveness of those platforms is strong. I have certainly noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common that more people will comment on the social network than comment here on the blog. It's easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where people go from all origins.

However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:

  • Is it browsed, serial or to be sampled?
  • Is there a core concept of new messages vs. already-read messages?
  • If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
  • Is it designed to make it easy to write and post or easy to read and consume?

Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including Plato, BBSs, Compuserve and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are "read" and "unread." A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they came, and when you are done with them, they move out of your view.

E-mail is largely used to read messages one at a time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read -- almost never all of them -- and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will only read those of particular interest, though this depends on the volume.

Echos of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar but being less frequent, more stories would be actually read.

USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.

In fact, the 1990s saw a switch for most to browsed media. Most web message boards were quite poor and slow to use, many did not even do the most fundamental thing of remembering what you had read and offering a "what's new for me?" view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and now is shutting down.) Online feed readers allow you to roam from device to device and read your feeds, and people like that.
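As a small illustration of that polling model, here is a hedged Python sketch of a serial reader's core loop: fetch an RSS feed, show only the items not seen before, and remember everything as read. The feed URL is a placeholder, and real readers handle Atom, namespaces, caching and many other details.

    import urllib.request
    import xml.etree.ElementTree as ET

    def poll_feed(url: str, seen: set) -> list:
        """Fetch an RSS feed and return only items not seen before,
        mimicking a serial reader's 'unread messages' view."""
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        new_titles = []
        for item in root.iter("item"):
            guid = (item.findtext("guid") or item.findtext("link") or "").strip()
            if guid and guid not in seen:
                seen.add(guid)                    # mark as read, in bulk, per poll
                new_titles.append(item.findtext("title"))
        return new_titles

    # Each poll shows only what arrived since the last one.
    seen_guids = set()
    for title in poll_feed("http://example.com/feed.rss", seen_guids):
        print(title)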

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths I wrote why the development of "vehicle to vehicle" (V2V) communications was mostly orthogonal to that of robocars. That's very far from the view of many authors, and most of those in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is some actual value in V2V, and we would like to see that value realized in the future, I am afraid that the current strategy will not work out and thus misdirect a lot of resources.

This is particularly apropos because recently, the FCC issued an NPRM saying it wants to open up the DSRC band at 5.9ghz that was meant for V2V for unlicensed wifi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicensed spectrum is quite appropriately very large -- the opening up of 2.4ghz decades ago generated the greatest period of innovation in the history of radio -- and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or for that matter, the telephone itself.) For a lot of us, we did not get a fax machine until it was clear that lots of people had fax machines, and we were routinely having people ask us to send or receive faxes. But somebody had to buy the first fax machine, in fact others had to buy the first million fax machines before this could start happening.

This was not a problem because while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines, and one day two companies noticed they both had fax and started communicating inter-company instead of intra-company.

So we see rule one: The technology has to have strong value to the first purchaser. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. This can be a high-cost, high-value "early adopter" value but it must be real.

This was true for fax, e-mail, phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail on top of the phone and later the internet. Skype was on top of the internet and PCs. The underlying system allowed it to be possible for two people to adopt a technology which was useful to just those two, and the two people could be anywhere. Any two offices could get a fax or an e-mail system and communicate, only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did it, they got the full value they were paying for. They didn't pay for phone wires in the hope that others would some day also pay for wires and they could talk to them -- they found enough value calling the people already on that network.

Social networks are also interesting. There is a strong critical mass factor there. But with social networks, they are useful to a small group of friends who join. It is not necessary that other people's social groups join, not at first. And they have the advantage of viral spreading -- the existing infrastructure of e-mail allows one person to invite all their friends to join in.

Enter Car V2V

Car V2V doesn't satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.


Perils of the long range electric car

You've probably seen the battle going on between Elon Musk of Tesla and the New York Times over the strongly negative review the NYT made of a long road trip in a Model S. The reviewer ran out of charge and had a very rough trip with lots of range anxiety. The data logs published by Tesla show he made a number of mistakes, didn't follow some instructions on speed and heat and could have pulled off the road trip if he had done it right.

Both sides are right, though. Tesla has made it possible to do the road trip in the Model S, but they haven't made it easy. It's possible to screw it up, and instructions to go slow and keep the heater low are not ones people want to follow. 40-minute supercharges are still pretty long, they are not good for the battery, and it's hard to believe they scale since they take so long. While Better Place's battery swap provides a tolerable 5-minute swap, it also presents scaling issues -- you don't want to show up at a station that does 5-minute swaps and be 6th in line.

The Tesla Model S is an amazing car, hugely fun to drive and zippy, cool on the inside and high tech. Driving around a large metro area can be done without range anxiety, which is great. I would love to have one -- I just love $85K more. But a long road trip, particularly on a cold day? There are better choices. (And in the Robocar world when you can get cars delivered, you will get the right car for your trip delivered.)

Electric cars have a number of worthwhile advantages, and as battery technologies improve they will come into their own. But let's consider the economics of a long range electric. The Tesla Model S comes in 3 levels, and there is a $20,000 difference between the 40kwh 160-mile version and the 85kwh 300-mile version. It's a $35K difference if you want the performance package.

The unspoken secret of electric cars is that while you can get the electricity for the Model S for just 3 cents/mile at national grid average prices (compared to 12 cents/mile for gasoline in a 30mpg car and 7 cents/mile in a 50mpg hybrid), this is not the full story. You also pay, as you can see, a lot for the battery. There are conflicting reports on how long a battery pack will last you (and that in turn varies with how you use and abuse it.) If we take the battery lifetime at 150,000 miles -- which is more than most give it -- you can see that the extra 45kwh add-on in the Tesla for $20K is costing about 13 cents/mile. The whole battery pack in the 85kwh Tesla, at $42K estimated, is costing a whopping 28 cents/mile for depreciation.

Here's a yikes. At a 5% interest rate, you're paying $2,100 a year in interest on the $42,000 Tesla S 85kwh battery pack. If you go the national average 12,000 miles/year that's 17.5 cents/mile just for interest on the battery. Not counting vehicle or battery life. Add interest, depreciation and electricity and it's just under 40 cents/mile -- similar to a 10mpg Hummer H2. (I bet most Tesla Model S owners do more than that average 12K miles/year, which improves this.)
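Here is a small Python sketch that reproduces the arithmetic above for both packs, using the same assumptions as the text (pack cost, a 150,000-mile pack life, 5% interest and 12,000 miles/year); change the inputs to see how sensitive the per-mile figures are.

    def battery_cents_per_mile(pack_cost, pack_life_miles, interest_rate, miles_per_year):
        # Depreciation: spread the pack cost over its assumed useful life.
        depreciation = pack_cost / pack_life_miles * 100              # cents/mile
        # Interest: yearly carrying cost of the pack, spread over yearly miles.
        interest = pack_cost * interest_rate / miles_per_year * 100   # cents/mile
        return depreciation, interest

    # Tesla Model S 85kwh pack, estimated at $42,000
    dep, fin = battery_cents_per_mile(42000, 150000, 0.05, 12000)
    print(f"85kwh pack: {dep:.0f} cents/mile depreciation + {fin:.1f} cents/mile interest")

    # Nissan LEAF 24kwh pack, estimated at $15,000
    dep, fin = battery_cents_per_mile(15000, 150000, 0.05, 12000)
    print(f"24kwh pack: {dep:.0f} cents/mile depreciation + {fin:.2f} cents/mile interest")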

In other words, the cost of the battery dwarfs the cost of the electricity, and sadly it also dwarfs the cost of gasoline in most cars. With an electric car, you are effectively paying most of your fuel costs up front. You may also be adding home charging station costs. This helps us learn how much cheaper we must make the battery.

It's a bit easier in the Nissan LEAF, whose 24kwh battery pack is estimated to cost about $15,000. Here if it lasts 150K miles we have 10 cents/mile plus the electricity, for a total cost of 13 cents/mile which competes with gasoline cars, though adding interest it's 19 cents/mile -- which does not compete. As a plus, the electric car is simpler and should need less maintenance. (Of course with as much as $10,000 in tax credits, that battery pack can be a reasonable purchase, at taxpayer expense.) A typical gasoline car spends about 5 cents/mile on non-tire maintenance.

This math changes a lot with the actual battery life, and many people are estimating that battery lives will be worse than 150K miles and others are estimating more. The larger your battery pack and the less often you fully use it, the longer it lasts. The average car doesn't last a lot more than 150k miles, at least outside of California.

The problem with range anxiety becomes more clear. The 85kwh Tesla lets you do your daily driving around your city with no range anxiety. That's great, but to get it you buy a huge battery pack whose extra range you use only rarely. Most trips can actually be handled by the 70-mile range Leaf, though with some anxiety. You only need all that extra battery for occasional longer trips, yet you spend a lot of extra money just to use that range from time to time.

Your session has expired. Forgot your password? Click Here!

We see it all the time. We log in to a web site but after not doing anything on the site for a while -- sometimes as little as 10 minutes -- the site reports "your session has timed out, please log in again."

And you get the login screen, which offers, along with the ability to log in, a link marked "Forgot your password?" that lets you reset (OK) or recover (very bad) your password via your E-mail account.

The same E-mail account you are almost surely logged into in another tab or another window on your desktop. The same e-mail account that lets you go a very long time idle before needing authentication again -- perhaps even forever.

So if you've left your desktop and some villain has come to your computer and wants to get into that site that oh-so-wisely logged you out, all they need to do is click to recover the password, go into the E-mail to learn it, delete that E-mail and log in again.

Well, that's if you don't, as many people do, have your browser remember passwords, and thus they can log-in again without any trouble.

It's a little better if the site does only password reset rather than password recovery. In that case, they have to change your password, and you will at least detect they did that, because you can't log in any more and have to do a password reset. That is if you don't just think, "Damn, I must have forgotten that password. Oh well, I will reset it now."

In other words, a lot of user inconvenience for no security, except among the most paranoid who also have their E-mail auth time out just as quickly, which is nobody. Those who have their whole computer lock with the screen saver are a bit better off, as everything is locked out, as long as they also use whole disk encryption to stop an attacker from reading stuff off the disk.

