Wikipedia and Knowledge and Lafayette Commons

Food for Thought Dept.

Every once in a while I put up something that is more for chewing on in the context of Lafayette and Fiber than it is about those topics directly. Sunday Thoughts. Food for Thought. Those are the usual tags long-time readers will have noticed. Today the pointer is to a new bit from Kevin Kelly, an intellectual hero of sorts for me.

Kevin Kelly has changed his mind about Wikipedia. It works. Most folks that “knew anything” knew it wouldn’t work. Kelly knew it wouldn’t work. And knew why. He, and they, were wrong. I think a lot of folks have made that admission. But few are as rigorously self-critical as Kelly. He tries to understand which of the assumptions that he brought to the table misled him—and asks what other judgments of his might be based on those now-disproven assumptions.

His conclusion about Wikipedia:

How wrong I was. The success of the Wikipedia keeps surpassing my expectations. Despite the flaws of human nature, it keeps getting better. Both the weakness and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism) in the first place, and so the good enough article prospers and continues. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.

This makes Kelly—who calls himself an individualist with a deeper sense of what that means than most—rethink his individualism and ask if there is a new and desirable sort of community emerging:

The Wikipedia has changed my mind, a fairly steady individualist, and led me toward this new social sphere. I am now much more interested in both the new power of the collective, and the new obligations stemming from individuals toward the collective. In addition to expanding civil rights, I want to expand civil duties. I am convinced that the full impact of the Wikipedia is still subterranean, and that its mind-changing power is working subconsciously on the global millennial generation, providing them with an existence proof of a beneficial hive mind, and an appreciation for believing in the impossible.

That’s what it’s done for me.

Read carefully, this post points to the way that Wikipedia’s basic structure, its architecture, its rules, its algorithmic frame, encourages real, competent participation and discourages, and makes inconsequential, sabotage and ignorance. You just don’t need a controlling hierarchy if you get the architecture right. It turns out that the “undo” command might be a critical social invention, or at least that’s the way I read it. Maybe that’s why we should prefer a digital world. Wanna know what “undo” has to do with it? Read the article. It’s well worth it.
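For the technically curious, here’s a minimal sketch of that asymmetry (illustrative Python, not Wikipedia’s actual code): because every revision is retained, repairing damage is one cheap, built-in step, while creating convincing damage takes real effort.

```python
# A toy revision history: every edit is kept, so "undo" is nearly free.
class Article:
    def __init__(self, text):
        self.history = [text]          # all revisions retained forever

    def edit(self, new_text):
        self.history.append(new_text)  # good edits and vandalism enter the same way

    def revert(self):
        # Restoring the prior version is itself just another revision;
        # nothing is lost, and repair costs a single step.
        if len(self.history) > 1:
            self.history.append(self.history[-2])

    @property
    def text(self):
        return self.history[-1]

page = Article("Lafayette is a city in south Louisiana.")
page.edit("asdf vandalism asdf")  # damage: trivial to spot, costly to make convincing
page.revert()                     # repair: one step, thanks to the architecture
print(page.text)                  # the good-enough article survives
```

The point is architectural: when repair is cheaper than damage, good faith wins by default, with no controlling hierarchy required.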

That’s really interesting. And maybe it’s something that is not only interesting globally but locally—here in Lafayette. We here in this little place will have the monster bandwidth of our generous intranet connection (100 megs or more to all!—locally) and the absurdly cheap storage that comes with our era. What can we do with big storage and unthrottled bandwidth—more, what can we do that is worth doing? We on LPF, and the Lafayette Digital Divide Committee, have floated the idea of a Lafayette Commons—a deliberately vague notion of a site that would aggregate information and provide on-network resources to our community. Now our community doesn’t need an encyclopedia…it needs something more focused on local needs, local events, and local, timely knowledge. We need to know what’s going on down the block, who is hot in the local bar scene, what the real skinny is on the district four councilman’s connections, how to get funding for a new pocket park…and a lot of other things that I can’t imagine but you can. The knowledge and understanding is out there. Only getting the architecture right, making that knowledge accessible, stands in the way of our turning an amazingly fast and cheap local infrastructure into something really valuable.

And it might be that Wikipedia—and a new generation that thinks Wikipedia is normal—is worth learning from. Kelly remarks:

When you grow up knowing rather than admitting that such a thing as the Wikipedia works; when it is obvious to you that open source software is better; when you are certain that sharing your photos and other data yields more than safeguarding them — then these assumptions will become a platform for a yet more radical embrace of the commonwealth.

What sort of common wealth could we create? If we can just get the architecture right.

Interested?

Net Citizenship and You

Food For Thought: Wouldn’t you rather your master be you?

I’m going to have to lay out an unfamiliar thesis: You, fair reader, are almost certainly not on the internet. Not really. You are a second-class citizen who is not allowed to make many of the most basic decisions that full members are free to make; you are a dependent of your modem and the wireline owner it is connected to. Generously: you are a client of AT&T or Cox or ____ (your local duopolist here). Less generously: you are a second-class citizen of the internet allowed only the access that Big Daddy allows you. And Big Daddy, as in Tennessee Williams’ play, is more interested in wealth and power than he is in the welfare of his dependents.

Full citizenship on the web can be defined simply enough: full citizens can use their connection in any way that they want. They are independent actors who are free to make available or view anything.

That’s not you.

Take a look at your TOS (Terms of Service). Cox and AT&T’s, for instance, do meaningfully differ. But they agree about the essentials that concern us here:

1) You are the client, and clients of clients are forbidden; you may not distribute service to others.
2) You can’t talk bad about Big Daddy. (e.g.: “Customer is prohibited from engaging in any other activity, whether legal or not, that AT&T determines in its sole discretion, to be harmful to its subscribers, operations, network(s). This includes … or which causes AT&T or the AT&T IP Services to be viewed unfavorably by others.”)
3) Free speech? No sucha thing. They get to say what you can say. (e.g.: “Cox reserves the right to refuse to post or to remove any information or materials from the Service, in whole or in part, that it, in Cox’s sole discretion, deems to be illegal, offensive, indecent, or otherwise objectionable.”)
4) No Free Enterprise. You can’t sell things; for that you need the master’s special permission and a (higher-priced) service, regardless of how much traffic you use.
5) It’s not your connection. “Unlimited, always-on” connections are both limited and subject to an abrupt end. AT&T is bizarrely vague while Cox gives clear limits–which are seldom enforced. It’s not your connection; you need to remember that.
6) Your client status is a privilege, not a right. They can kick you to the curb at any time using whatever rationale seems most useful at the moment. (e.g.: “Customer’s failure to observe the guidelines set forth in this AUP may result in AT&T taking actions anywhere from a warning to a suspension of privileges or termination of your Service(s). …AT&T’s decisions with respect to interpretation of the AUP and appropriate remedial actions are final and determined by AT&T in its sole discretion.”)
7) Lucky 7 Lagniappe clause: Masters don’t have to follow the rules, only clients. (e.g.: “AT&T reserves the right, but does not assume the obligation, to strictly enforce the AUP.”)

You are in a master-client relationship with your network provider. You are NOT a full citizen of the internet. Your “location,” your IP address, belongs to someone else. They have an assured, static IP. You do not. As long as they own that property you are dependent upon them, and they can dictate the terms of its use.

Be aware that this is not the way it was supposed to be. The internet, right down to its IP core, was designed around your freedom to connect.

One way of looking at network citizenship is through the lens of internet protocols and the operation of “the end-to-end principle.” From Wikipedia:

The end-to-end principle is one of the central design principles of the Transmission Control Protocol (TCP) widely used on the Internet as well as in other protocols and distributed systems in general. The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled.

That’s a mouthful. Translated: the internet is designed as a transmission medium that is supposed to be controlled by those on the ends of a communication. You and the person at the other end. A request from one end is simply passed on to the other end—no single, centrally-controlled “circuit” exists. No controller stands in the middle. This is in contrast to the underlying design of the phone network, with its centralized circuit-switching system that designates a circuit for you and holds it open. (We’re talking about protocols now…not physical implementation or the practical experience of users.)
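To see how bare that end-to-end contract is, here’s a toy sketch in Python (loopback address and port are arbitrary): two ordinary endpoints exchange a datagram, and everything meaningful about the exchange lives in the programs at the ends, not in the network between them.

```python
# Two endpoints; the network in between just moves packets, nothing more.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))     # one "end" of the conversation

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello from the other end", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(1024)
# What these bytes *mean* is decided here, at an endpoint -- no controller
# in the middle holds a circuit open or interprets the traffic.
print(data.decode(), "from", addr)
```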

Net neutrality battles are raging around the edge of this nascent war. We want to be full citizens of the new order. The incumbents would prefer that we be clients, vassals, and that they be the masters. Right now they are winning. Right now few of us even realize that the current order is not necessary or natural—it was arranged for somebody else’s benefit, not for ours.

It really is that simple.

What we need to recognize is the nature of the war. What we need to be fighting for is ownership of our own connection. For full citizenship. To kill the Master-client relationship that constrains our current access to the network.

Ownership of the network is the most complete solution. Any limits we impose on ourselves are limits that we impose; they are not the dictates of the master. We may start out copying what we know in some ways. But that won’t last.

Lafayette, with its community-owned, fiber-based network utility, is a good example of how that will work. From the beginning things will be different here. We’ll have static IP addresses…and a lot of potential will flow from that. We’ll have full access to the speeds and capacity of our own network–that is what the 100 meg intranet is all about. As it becomes more and more obvious that many of the limits imposed by the current owners are not natural and not in the interests of users, we’ll change those aspects as well.

That’s the real value of the battle fought and won here in Lafayette.

Worth thinking about…

$200 PC Available in Lafayette

A Wired blog sez that a $200 Ubuntu Linux PC, sans monitor, is now available in Lafayette.

Cool. And it’s especially great for Lafayette.

Why great for Lafayette?

This computer and its software packages come very close to being exactly the computer that the Lafayette Digital Divide Committee recommended in the “Bridging the Digital Divide” document.

That study, which became official policy when it was made an ordinance by the city-parish council, recommended a mix of low cost computers, free open source software, and a local portal/server that leveraged the intranet bandwidth the committee recommended LUS make available to its customers. Let’s take a look at how that has played out:

The key, and hardest, part of that equation was securing the use of full intranet bandwidth—when the committee first recommended Lafayette adopt that policy there was real doubt that it was technically feasible. In short order such doubt was dispelled. Since that time LUS and the city-parish have fully committed to providing at least 100 megs of intranet bandwidth to every user regardless of how much they spend for internet connectivity. Huval and LUS call this “peer to peer bandwidth.” With 100 megs locally available to all users, a rich local portal and aggressive use of server-based applications become possible. Since much of the computing and handling of large quantities of data can be done on the network rather than in the user’s personal computer, much less powerful—and hence less expensive—computers can be used.
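To make that reasoning concrete, here’s a minimal sketch (standard-library Python, hypothetical names and port): the inexpensive desktop ships a small job across the fast local network, and a shared server does the heavy lifting.

```python
# A toy "the network does the work" demo: a LAN server computes, a thin client asks.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class OffloadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The heavy lifting happens here, on the network, not on the $200 PC.
        length = int(self.headers["Content-Length"])
        job = json.loads(self.rfile.read(length))
        result = json.dumps({"sum": sum(job["numbers"])}).encode()  # stand-in for real work
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

server = HTTPServer(("127.0.0.1", 8765), OffloadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side is all the low-cost machine needs: one small request.
req = Request("http://127.0.0.1:8765/",
              data=json.dumps({"numbers": [1, 2, 3]}).encode(),
              headers={"Content-Type": "application/json"})
print(json.loads(urlopen(req).read()))   # {'sum': 6}
server.shutdown()
```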

That brings us back to the subject of today’s post: Everex’s TC2502 gPC computer. This ’puter is available through WalMart for $200, and Wired’s blog carries a list of locations that will stock it, a list that includes Lafayette. It is also available over the net from WalMart’s online store. It is sold without a monitor but includes a mouse, keyboard, and a set of speakers. The desktop computer runs a variant of the free Ubuntu Linux operating system called gOS. Also free is the installed open source software, including OpenOffice, the Firefox web browser, Meebo IM, Skype, GIMP photo software, the Xing DVD and video player, and Rhythmbox music management software. Even more interesting for local digital divide promoters is that it includes icons linking to Google applications like Mail, Documents, Spreadsheets, Calendar, News, and Maps.

Between LUS’ solid commitment to lower prices for connectivity (now more important than computer cost as a barrier to adoption), Google’s online apps, and the emergence of commercially available, low-cost, open source computers like this Everex, the pieces are falling into place for Lafayette to have a digital divide program that will be as unique as the system itself.

The Other YouTube

ToDo & Sunday Thought Departments
Small Print Warning: some curriculum theory from a previous life—cleverly obscured—lies ahead. Please ignore. 🙂

Ok, we all know about YouTube–it is that silly-fascinating site where dogs ride skateboards and people spend a lot of time crying for a fascinated public.

Pure entertainment–in the bad sense of fascinatingly mindless distracting pablum.

But there is the other YouTube.

That’s the YouTube that has created a brand new bottom-up educational format: the short video instruction. It’s fun, it’s popular, it works, and it’s what entertainment can be in its best sense: a fascinatingly engaging way to learn. Most educational video shorts—let’s call them “instructables” so we have a less awkward handle—are somewhere between two and six minutes long. They focus on some small bit of “doing,” like making a nifty techno-toy, or showing a dance move, or throwing a pot on the wheel. The producers are most often advanced users and the consumers anyone who wants to learn “how.”

You might have watched some of these but didn’t have a category to put them in. Here is a nice little example for anyone for whom the description doesn’t strike a chord:

That “instructable” is an example of “throwing off the hump.” Potters do that when they want to make a series of similar small items. It’s not an easy thing to describe–books, blackboards, and lecture halls are not good media for conveying that variety of learning. It’s the sort of thing that is more usefully “shown.” There is a whole class of things that we’d like to teach which are better shown than described; things that are better experienced than conventionally taught. Video isn’t perfect, but these extremely short pieces of “conveyed experience” are very, very useful to the learner. The learner can see multiple examples (e.g.: another throwing off the hump). They are repeatable and they are deep. —Repeatable: if you didn’t see how he finished off the rim, watch it again. —Deep: in the sense that by watching it a learner who has had his or her hands in clay can “feel” how thin those walls must be, get a sense for how much “wobble” is tolerated, how many times to “pull” up walls, and what to do toward the final curve with each pull. All these things are (inadequately) discussed (at interminable length) in conventional classroom settings as preparation. But advisory rules about wall thickness and pulls are rather direct abstractions from experience, whose utility lies in allowing the student to move more quickly and effectively to new experience. They are much better taught after a student has learned to throw a few forms, as a way to move toward independent explorations.

(If you can’t get into potting, try the Zydeco demo, or the instructions for making cool LED “throwies,” and re-read the above paragraph with your example in mind. You could find similar instructables for welding, making lures, cooking creole, or applying makeup. There is a whole DIY section for you to browse. Let your passions rule.)

We don’t teach by example in schools because we don’t have the time. There are too many students in our classes for many of the most effective kinds of instruction to be possible. Instructables approach the one-on-one experience of tutorials. You watch at your own pace, you notice what is meaningful to you, and you can get repeated examples until you “get” the right approach. A real tutorial with the added dimensions of individualized feedback and things like force feedback (holding the student’s hands against the clay to give the “feel” of the appropriate pressure) would be even more valuable. Even so, instructables are a new and valuable form.

This is one of the reasons you should want big bandwidth. To really see some of the details in the potting example you’d want HD-quality video. I can imagine getting more personalized instruction from afar–if we had the bandwidth. A skilled potter (or master welder) in Lafayette could set up a nice shop and market personalized instruction over the net—if both ends had really big bandwidth.

Just for the record: the usefulness of this technique is not, in my judgment, limited to vocational topics or hobbies. Showing, and having the student find ways of solving a problem, is central to good mathematics instruction. Learning to read is something that has to be shown; letter sounds can pretty much only be “labeled” correctly after a student has learned to sound out letters by example… Much conventional instruction could be replaced or aided by providing multiple, repeatable, deep examples.

So…something ToDo on this Sunday when you really ought to be at Festivals Acadiens if you are an Acadiana denizen. And something to think about.

PS: Yes, yes…we just got a wheel. What of it? 🙂

Update, 7:28: ooops. I just looked at Boing Boing for the first time in weeks and down the list I spotted a nifty link to how to make clear ice cubes. So naturally I followed it (well, naturally for me). The link goes to a site called “Instructables”! I thought I had made up that term–but now it seems more likely that I’ve seen a reference to this site. Which is a pretty neat place to visit. (The ice cube link? Right here.)

Boosting Lafayette’s WiFi

Worth Thinking About Dept.

Executive Summary: Wireless provider FON’s recent successes provide an intriguing example for those interested in LUS’ still-unformed wi-fi network.

Recently BT (Britain’s dominant broadband provider) and Time-Warner cut deals with the Spanish wireless outfit FON. FON’s goal is to foster wi-fi bandwidth sharing among its membership, “foneros.” These recent deals are considered breakthroughs because they explicitly encourage users to share their bandwidth, something that network companies have previously forbidden.

The FON Idea:
Any fonero who freely shares their access can get onto any FON access point in the world for free. The company’s ground-up, user-built approach to building a hotspot network contrasts pretty dramatically with the top-down methods of the major wireless and phone service providers, who build, maintain, and charge a healthy fee to access their hotspot networks.

Credibility:
While the FON plan sounded impractical to some, it gained a prestigious group of backers even before the major partnership announcements in Europe, Britain, and the US; investors include Google, Skype, Index Ventures, and Sequoia Capital. The latest round of investment brought in major Japanese players, and BT invested in the company as part of its deal.

The deals cut with network providers BT (#1 in Britain), Neuf (#2 in France), and Time-Warner (#2 cable internet provider in the US) provide instant credibility for FON’s idea. All those networks’ members (Time-Warner has 6.6 million users) are now “foneros,” and wi-fi routers supplied by the company have been flashed with FON’s software. Future broadband subscribers will be encouraged to buy FON routers and share their connections. In Britain, as a result of BT’s dominant position and high adoption rates, speculation holds that dense urban areas will be nearly completely covered by the FON/BT network.

How it Works:
The new FON member attaches the FON-enabled wi-fi access point to the wired network connection they’ve paid for. FON wi-fi access points are cheap (occasionally free) and are software-configured to provide a public channel and a private, separately encrypted channel. The owner of the access point uses the private channel for their own, interior, at-home wi-fi network. The public channel’s bandwidth is controlled by the owner, who limits the bandwidth shared with fellow foneros to a portion that doesn’t degrade his or her experience. (Note: there is also an alternative: you can make some money off your access if a non-fonero decides to pay for access through your node.)
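For the technically inclined, here’s a sketch of the sharing mechanic (an illustration of the idea, not FON’s actual firmware): a simple token-bucket limiter caps what the public channel can consume, so guests never crowd out the owner’s private channel.

```python
# Cap the guest (public) channel with a token bucket; the private channel is untouched.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s       # refill speed: the owner's chosen share
        self.capacity = burst_bytes        # short bursts allowed up to this size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                    # guest packet fits in the shared slice
        return False                       # guest waits; the owner never notices

public_channel = TokenBucket(rate_bytes_per_s=128_000, burst_bytes=64_000)  # ~1 Mbps share
print(public_channel.allow(1500))          # a typical guest packet passes
```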

Win-Win-Win:
The users get free wi-fi access across the world in exchange for giving up a little bandwidth that they feel they don’t need. FON makes deals with the big providers. The big network providers get instant, user-paid-for and user-maintained wi-fi networks to brag on and sell to consumers.

There are advantages besides the obvious laptop uses you see at any coffee house in the city. Having a widely available wi-fi network means that users of wi-fi-enabled phones and devices (think certain PDAs, Nokia phones, and the iPhone) could effectively make phone calls for free from FON hotspots, in addition to surfing the web, using email, and handling other data-based interactions over the net. Connections made away from home would cost nothing beyond what they’d already paid for their home network.

Whoa! But there ARE problems:
But eager investors and a growing user base built on huge, established ISPs do not mean that all is rosy in Fonero Land. FON is faced with a perverse, inverted reflection of the problems of wi-fi-based muni broadband efforts.

I’ve discussed the problems of muni wi-fi at some length on these pages. Some of it boils down to the fact that mesh-based muni networks find it hard to provide adequate backhaul unless they have a dense fiber network to hang it off of. (We’ve got that one licked here in Lafayette.) But the second part of the problem is that the constraints placed on wi-fi restrict it to low power, and its spectrum allocation is such that wi-fi signals find it hard to penetrate dense vegetation and, especially, houses. Most people compute indoors. A public wi-fi network that has a hard time reliably getting inside homes makes for a very hard sell as a primary network. (LUS has tentatively solved this by selling fiber as the primary interior connection and making city-wide wi-fi an appropriately cheap add-on that will not be sold as suitable for in-home use.)

If muni wi-fi’s access-point-on-a-street-pole can’t get into homes, by the same token FON’s bottom-up, in-home network is going to find it hard to get out to the public areas of the neighborhood.

What’s needed is a wireless system with the strengths of both and the weaknesses of neither…

You see it coming, right?
LUS should either partner up with FON or do something similar itself. (FON’s software is not unique; other, open source software could emulate the basic capacities of the FON wi-fi router.)

LUS will be in a nearly unique position: it will have an FTTH network and a wireless one. The question, as always, is: How to best make use of the unique resources we are building in Lafayette. So far, in my humble opinion, LUS has mostly been making the smart moves. Fiber First is smart–the smartest basic move possible. That makes a strong wireless network possible. Given that starting point, it is smart to go ahead and build wireless mobile capacity as LUS is planning to do. It’s smart to not pretend that wi-fi can be an adequate substitute for a reliable, wired network. LUS isn’t doing that; instead LUS’ wi-fi will be positioned, as it should be, as a low-cost mobility addition. What is ironic is that Lafayette’s wireless network, while relegated to secondary status locally, will be faster and more reliable than any public wi-fi network in the nation; its dense fiber connectivity and the design decision to avoid more than rudimentary use of mesh re-routing assure that.

But, as smart as all that is, LUS’ muscular wi-fi network will still have trouble getting into the home. Coverage will still be spotty and shifting–like cell phone coverage is, only more so. All that is a matter of physics and federal regulation — no amount of smart network design can completely eliminate the issue.

The smart way to minimize coverage problems is to provide both the muni solution for outside, public space and a FON-style solution for interiors. And because LUS will control both sides we can do what nobody else can: integrate the two. LUS would provide coverage on the streets and in public spaces. Subscribers, using FON equipment or similar router software, cover their own interiors and their yard away from the street to exactly the degree they find useful for their own private, locked-down wi-fi channel. Piggybacked onto that would be a second, public channel that would be available to all LUS subscribers. It’d be used by meter readers, police, friends, and folks visiting town who’ve bought the three-day pass—and foneros if we go that route. (If we join FON, local subscribers could roam on FON points anywhere.) As long as you were visiting locales that used LUS fiber you’d never have to log into a private network. As mobile users moved down streets, into offices, and visited friends they could, potentially, remain on the public network the entire time and never have to log into anyone’s private network or use any resources that weren’t public.

Near-ubiquity of coverage would allow VOIP phones to become truly useful in the city, making truly mobile wi-fi telephony a reality. WiFi-enabled handhelds, from iPhones, to Blackberries, to Nokia phones, to Skype phones, to various “smart” PDA hybrids, would become reliably useful without having to buy into expensive packages from cellular providers, enabling a whole new class of network devices to become cheaply available to everyday Lafayette users.

The Bottom Line:
LUS could sweeten the pot for its subscribers by providing each broadband customer who agrees to share, using the LUS-approved equipment and software, an extra meg of “lagniappe” bandwidth, so that sharing actually provides a small boost in capacity for the subscriber who bought their own router and occasionally shares their extra capacity. Recall also that LUS will (again, almost uniquely) be providing every user unthrottled in-system bandwidth. Wi-fi-routed packets that stayed inside our system would fall under that local-use umbrella. The relatively small bandwidth diverted to wi-fi sharing would be a mere drop in the bucket for the LUS user in that instance.

Lafayette’s resulting wi-fi service would be as nearly flawless as is humanly possible, both inside and outside. Segregating public and private networks would increase the security of subscribers’ personal networks, making wi-fi networks more secure for regular users than they are today. Subscribers would understand that coverage inside their homes was their responsibility while at the same time gaining access to the public network everywhere. As users found holes in coverage in places where they needed it, they could simply move their wi-fi point or add a cheap repeater.

The net effect for LUS would be that the users would plug many of the holes in the city’s cloud themselves–at their own expense–when they felt they needed coverage and only when they did. The resulting network with public channels available both inside and outside participants’ buildings would be more dynamic and more nearly ubiquitous than any in the country. And ubiquity is the major selling point of any wireless mobility network.

The net effect for users would be a robust public network that was available both inside and outside wherever the people that lived or worked there thought it would be useful. That’s simply unavailable anywhere else. A user’s laptop would be more useful than ever. And mobile devices of all kinds would bloom in Lafayette as the price premium for service vanished.

It would be a very profitable collaboration between the community’s telecom utility and its citizen-owners; a collaboration available to almost no one else.

Worth thinking about, don’t you think?

(And a thanks to reader Jon who first pointed me at the BT story….)

“New” Web Business Models in Lafayette

Food For Thought Dept.

Here’s something worth thinking about: Arguably a Lafayette firm is running its business based on what web-folk will tell you is the hottest new cutting-edge business model. That firm, as reported by the Advertiser’s Bob Moser, is Fugro Chance. Fugro Chance is a survey company specializing in the Gulf of Mexico. It sells its ability to locate things accurately on a map. That is its product. But Chance appears to know that what has really kept it in business for 30 years is trust: its customers believe that they can trust it to locate things accurately, and they trust that Chance isn’t about to turn its special knowledge into an excuse to rip them off. So its customers return…

What got Fugro Chance an admiring piece in the paper is that it gave away its most valuable product, a comprehensive map of the pipelines, old and new, active and inactive, in the Gulf, for free. Apparently no one else has the history and focus to match its expertise, and after the storms of ’05 ripped up the Gulf’s offshore platforms an accurate map of the pipelines was crucial to quick, efficient recovery. Everyone from FEMA to 200 industry insiders needed the map. They got it. From the story:

They could have charged thousands of dollars for this map, and most would have paid it. But this mainstay of oil and gas mapping knew what was right, says Marine Data Manager Lionel Cormier. Plus, generosity builds loyalty.

“We e-mailed pdf files (of the map) possibly to 200 people within a few weeks of the hurricanes, it was a handout to the industry,” Cormier said. “We felt we were the only one who could produce that map in that timeframe. … There was more to win than to lose.”

That might not strike you as exactly a hot, new, cutting-edge business strategy. It might seem remarkably long-sighted in a business climate that trumpets short-term gains and treats ruthless, immediate exploitation of every advantage over your customers as some sort of business virtue leading to “maximizing ROI.” You may remember a time when people understood that greed wasn’t good business. But this approach to business probably doesn’t strike you as new; rather it seems like the “old” model.

But that might be because you’re from down the bayou…from a place where shopkeepers used to give away lagniappe in an effort to give “a little extra” in the form of an inexpensive treat for the kids or the customer. That little extra served to prove that the transaction wasn’t purely motivated by faceless profit-taking; that the store owner was willing to give a little back in a form that acknowledged the life of the customer.

Not everybody has that, or a similar experience, in their history.

There’s a lot of hoo-ha online about “new” business models (for example, Google) that involve giving away valuable products (like maps or search) and showing respect for your customers (by not abusing their trust) in return for customer loyalty toward your product (in Google’s case a tolerance for their advertising). Similarly Linux’s open source business model is built around a free “product,” the Linux operating system. What is sold is the expertise to extend the product and to provide high levels of support and integration. In a word: trust.

That trust-based business model is reported to be some sort of new discovery driven by network economics and constructed by brilliant young bi-coastal entrepreneurs and especially suited to the internet’s economy.

Now giving away your central product–as Google arguably does with its search engine results—might seem a new element, one that would justify calling the model new.

But right chere in Lafayette, cher, there is the example of Fugro Chance, which operates successfully in the cutthroat oil industry and proves that the gift economy—the framework for understanding Google, commons-based peer production, and the open source business model—isn’t particularly new nor something particularly suited to the internet.

The old and the new collide in Lafayette. It’d be a good thing—and a wise thing—if local tech businesses were to learn the lessons taught by both Linux and old French shopkeepers: business is about Trust. Dollars are a by-product.

PhotoSynth & Web 2.0—Worth Thinking About

Sunday Thought Dept.

Warning: this is seriously different from the usual fare here but fits roughly into my occasional “Sunday Thought” posts. I’ve been thinking hard about how to make the web more compelling for users and especially how to integrate the local interests that seem so weakly represented on the internet. As part of that exploration I ran across a research program labeled “PhotoSynth.” It offers a way to integrate “place” into the abstract digital world of the web in a pretty compelling way if your interest is in localism: it automatically recreates a three-dimensional world from any random set of photographs of a scene and allows tags and links to be embedded in them. Once anyone has tagged a local feature (say, the fireman’s statue on Vermilion St.) or associated a review with a picture of Don’s Seafood downtown, everyone else’s images are, in effect, enriched by their ability to “inherit” that information.

But it seems that it is a lot more than just the best thing to happen to advocates of web localism in a long time. It’s very fundamental stuff, I think, with implications far beyond building a better local web portal…. Read On…

—————————-
PhotoSynth, aka “Photo Tourism,” encapsulates a couple of ideas that are well worth thinking hard about. Potentially this technical tour de force provides a new, automated, and actually valuable way of building representations of the world we live in.

This is a big deal.

Before I get all abstract on you (as I am determined to do) let me strongly encourage you to first take a look at the most basic technical ideas behind what I’m talking about. Please take the time to absorb a five-and-a-half-minute video illustrating the technology. If you’re more a textual learner you can take a quick look at the text-based, photo-illustrated overview from the University of Washington/Microsoft lab. But I recommend trying the video first.

(Sorry this video was removed by YouTube).

You did that? Good; thanks….otherwise the rest will be pretty opaque—more difficult to understand than it needs to be.

One way to look at what the technology does is that it recreates a digitized 3D world from a 2D one. It builds a fully digital 3D model of the world from multiple 2D photos. Many users contribute their “bits” of imagery and, together, those bits are automatically interlinked to yield, out of multiple points of view, a “rounded” representation of the scene. The linkages between images are established on the basis of data inside the images–on the basis of their partial overlap—and ultimately on the basis of their actually existing next to each other—and this is done without the considered decisions of engaged humans.
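To give a flavor of how links can come from the pixels themselves, here’s a rough sketch assuming a recent OpenCV library and hypothetical filenames (PhotoSynth’s real pipeline, SIFT feature matching feeding structure-from-motion, is far more elaborate): two photos get “linked” when enough of their local features match.

```python
# Link two photos if they share enough matching local features.
import cv2

def linked(path_a, path_b, min_matches=50):
    sift = cv2.SIFT_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher().knnMatch(desc_a, desc_b, k=2)
    # Lowe's ratio test: keep only distinctive matches.
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    return len(good) >= min_matches   # enough overlap: the photos belong together

# Hypothetical files: two tourists' different shots of the same fountain.
# print(linked("fountain_1.jpg", "fountain_2.jpg"))
```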

Why is that a big deal?

Because it’s not all handmade. Today’s web is stunningly valuable, but it is also almost completely handmade. Each image or word is purpose-chosen for its small niche on a web page or in its fragment of context. The links that connect the web’s parts are (for the most part) hand-crafted as well and represent someone’s thoughtful decision. Attempts to automate the construction of the web, to automatically create useful links, have failed miserably—largely because connections need to be meaningful in terms of the user’s purpose, and algorithms don’t grok meaning or purpose.

The web has been limited by its hand-crafted nature. There is information (of all sorts, from videos of pottery being thrown, to bird calls, to statistical tables) out there we can’t get to—or even get an indication that we ought to want to get to. We rely mostly on links to find as much as we do, and those rely on people making the decision to hand-craft them. But we don’t have the time, or the inclination, to make explicit and machine-readable all the useful associations that lend meaning to what we encounter in our lives. So the web remains oddly thin—it consists of the few things that are both easy enough and inordinately important enough to a few of our fellows to get represented on the net. It is their overwhelming number and the fact that we are all competent in our own special domains that make the web so varied and fascinating.

You might think that web search, most notably the big success story of the current web, Google’s, serves as a ready substitute for consciously crafted links. We think Google links us to appropriate pages without human intervention. But we’re not quite right—Google’s underlying set of algorithms, collectively known as “PageRank,” mostly just ranks pages by how many other pages link to them, weighting those links by the links the linking sites themselves receive…and so on. To the extent that web search works it relies on making use of handmade links. The little fleas algorithm.™ It’s handmade links all the way down.
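If you want that recursion in miniature, here’s a bare-bones sketch (a toy, not Google’s production system, which layers endless refinements on top): rank flows along handmade links, so pages that well-linked pages link to become important.

```python
# Toy PageRank: a page's rank is fed by the ranks of the pages linking to it.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # Each page splits its rank among the pages it links to.
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A three-page web: "a" and "b" both link to "c", so "c" ranks highest.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```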

Google was merely the first to effectively repackage human judgment. You’ve heard of web 2.0? (More) The idea that underpins that widely hyped craze is that you can go to your users to supply the content, the meaning, the links. That too is symptomatic of what I’m trying to point to here: the model that relies solely on the web being built by “developers” who are guessing their users’ needs has reached its limits.

That’s why Web 2.0 is a big deal: The folks designing the web are groping toward a realization of their limits, how to deal with them, and keep the utility of the web growing.

It is against that backdrop that PhotoSynth appears. It represents another path toward a richer web. The technologies it uses have been combined to contextually index images based on their location in the real, physical world. The physical world becomes its own index—one that exists independently of hand-crafted links. Both Google and Yahoo have been looking for a way to harness “localism,” recognizing that they are missing a lot of what is important to users by not being able to locate places, events, and things that are close to the user’s physical location.

The new “physical index” would quickly become intertwined with the meaning-based web we have developed. Every photo that you own would, once correlated with the PhotoSynth image, “inherit” all the tags and links embedded in all the other imagery there or nearby. More and more photos are tagged with metadata, and sites like Flickr allow you to annotate elements of the photograph (as does PhotoSynth). The tags and links represented tie back into the already established web of hand-crafted links and knit them together in new ways. And it potentially goes further: image formats typically already support time stamps, and often a time stamp is registered in a digital photograph’s metadata even when the user is unaware of it. Though I’ve not seen any sign that PhotoSynth makes use of time data, it would clearly be almost trivial to add that functionality. And that would add an automatic “time index” to the mix. So if you wanted to see pictures of the Vatican in every season you could…or view images stretching back to antiquity.
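The time stamp really is already sitting there. A sketch, assuming the Pillow imaging library and a purely hypothetical filename, of how little work the “time index” would take to begin building:

```python
# Read the timestamp most cameras already write into a photo's EXIF metadata.
from PIL import Image, ExifTags

def photo_timestamp(path):
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "DateTime":
            return value                  # e.g. "2005:04:23 14:31:07"
    return None                           # no timestamp recorded

# Hypothetical filename; any ordinary camera JPEG would do.
# print(photo_timestamp("festival_international.jpg"))
```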

It’s easy to fantasize about how place, time, and meaning-based linking might work together. Let’s suppose you stumble across a nifty picture of an African Dance troupe. Metadata links that to a date and location—Lafayette in April of 2005. A user tag associated with the picture is “Festival International.” From there you get to the Festival International de Louisiane website. You pull up—effectively create—a 3-D image of the Downtown venue recreated from photos centered on the stage 50 feet from where the metadata says the picture was taken. A bit of exploration in the area finds Don’s Seafood, the Louisiana Crafts Guild, a nifty fireman’s statue, a fountain (with an amazing number of available photos) and another stage. That stage has a lot of associations with “Zydeco” and “Cajun” and “Creole.” You find yourself virtually at the old “El Sido’s,” get a look at the neighborhood and begin to wonder about the connections between place, poverty, culture, and music….

The technologies used in PhotoSynth are not new or unique. Putting them all together is…and it potentially points the way toward a very powerful means of enhancing the web and making it more powerfully local.

Worth thinking about on a quiet Sunday afternoon.

Lots o’ Lagniappe:

TED talk Video — uses a Flickr data set to illustrate how the program can scoop up any imagery. This was the first reference I fell across.

Photo Tourism Video — Explains the basics, using the photo tourism interface. Shows the annotation feature of the program…

Roots of PhotoSynth Research Video—an interesting background bit…seadragon, stitching & virtual tourist, 3D extraction….

PhotoSynth on the Web Video: a version of the program is shown running in a web browser; only available to late version Microsoft users. (Web Site)

Microsoft Interactive Visual Media Group Site. Several of these projects look very interesting—and you can see how some of the technologies deployed in PhotoSynth have been used in other contexts.

Microsoft PhotoSynth Homepage

Verizon’s fiber-optic payoff | CNET News.com

Food for Thought, Learning from the Big Guys Division:

“Verizon’s fiber-optic payoff”: The premise of this story is that Verizon did right by going with a Fiber To The Home plan…and that AT&T/BellSouth messed up by trying to get by on the cheap. Verizon will have the bandwidth and the flexibility to compete more than adequately against the cable companies. But AT&T will not, on the basis of this author’s analysis.

Lafayette can mine Verizon’s experience for insight into how an all-fiber network can compete against the cablecos. Verizon will be several years ahead of LUS in the deployment of a new fiber system, and its successes can be followed and its failures avoided. Verizon is, happily for Lafayette, not the incumbent locally. Its enormous numbers will allow suppliers and developers for its products to supply a place like Lafayette almost as an afterthought–and at reasonable prices, since the big-ticket purchaser is the giant Verizon. Even better, the also-rans on giant Verizon contracts will be looking for a place to prove their ideas. If Lafayette’s buyers are wise we’ll be able to cut some interesting deals on cutting-edge products that badly need a place to demonstrate their viability before being marketed to the big fellow. A real danger has always been that LUS would be so far ahead of what the market in other places can provide that useful products would have to be untested and hand-crafted for us. That could be exciting…but expensive. Much better to have a big trailblazer proving basic concepts somewhere else. Then we only have to lay in a better implementation.

Apparently Verizon is succeeding. Though its stock has taken a hit because its heavy, long-term investment in fiber has suppressed earnings, its subscriber numbers are very healthy for the same reason. Its bet on a combination of old-style cable technology, fancy new IPTV for the extras, and fiber-to-the-home capacity is allowing the company to deliver both a high-quality, reliable TV experience and fancy new IP services. AT&T, on the other hand, has high stock prices but low subscriber numbers for its new hybrid fiber/DSL system that uses unstable IP for all its services. If I were looking at a long-term investment I know where my money would go.

The ticket for Verizon so far appears to be sticking to absolutely reliable technology for the cash cow—cable TV—and using the capacity of fiber to deliver more channels, especially more bandwidth-hungry HDTV. In addition they are mounting an aggressive push into IP-based services. Verizon is clear about the need to migrate to a full IP system as soon as the technology is proven, and its commitment, along with AT&T’s, ensures that the kinks will be worked out of IPTV sooner rather than later. All in all Verizon’s success validates the similar decisions made by the technical guys at LUS; the bottom line is that it’s good news for LUS.

Is there a downside for Lafayette? Sure. LUS may not compete with Verizon but Cox does. And as Verizon proves that fiber means deadly competition, Cox is more likely to feel the pressure to develop effective ways to expand its own bandwidth and its competitive delivery of services. But that’s not all bad, of course–for the consumer. The catch for Cox is that Verizon is currently succeeding without competing much on price. LUS will compete on price, and that, plus the local loyalty that LUS has gained (and Cox forfeited), makes LUS a much tougher target locally than Verizon is through most of its footprint.

The real loser? AT&T/BellSouth who will have the least capable system in the city and very little room to really compete on price.

Creating a Lafayette Commons

Here’s something that’s worth the read on a rainy Sunday afternoon. It’s an inspiring essay titled “Reclaiming the Commons” by David Bollier. He bitterly complains about the growing tendency to allow our common resources and heritage—from concrete public property like oil or grazing rights on public land, to more abstract rights to goods created by regulation like the electromagnetic spectrum, to truly abstract (but very real and increasingly valuable) rights to common ideas and intellectual resources—to be taken from us and handed over to the few.

The argument is that we are all poorer for it. And that society would be richer if those things remained in the public domain. He convincingly argues that undue private ownership of ideas stifles the invention of new variants and new ideas.

The point for Lafayette is that we are about to create a new common resource: the Lafayette fiber intranet, and we are creating it as a publicly-owned resource. If Bollier is right then we have a real responsibility to make sure that our common property serves the common good and that it not be “enclosed” by the few.

We here in Lafayette will be in the nearly unique position of commonly owning a completely up-to-date telecommunications infrastructure ranging from a fiber-all-the-way-to-the-home network, to a wifi network using the capacity of that fiber. A citizen who wants to will be able to get all of his telecommunications needs met using local resources, resources that are owned in common.

A lot of what is missing on the web is access to local resources: the church calendar, the schedule for the shrimp truck, what vegetables are available at the farmer’s market, specials at the local restaurants, nearby childcare, adult ed resources, local jobs…and much more that either isn’t available or is the next best thing to impossible to usefully gather in one place. We could, by acting together, fix that.
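As a small illustration of the aggregation half of the job, here’s a sketch assuming the feedparser library and purely hypothetical local feed URLs; the information exists, and what’s missing is one place that gathers it:

```python
# Pull the newest items from a handful of local feeds into one digest.
import feedparser

LOCAL_FEEDS = [                                # all hypothetical URLs
    "http://example.org/church-calendar.rss",
    "http://example.org/farmers-market.rss",
    "http://example.org/restaurant-specials.rss",
]

def commons_digest(feeds=LOCAL_FEEDS, per_feed=3):
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        source = parsed.feed.get("title", url)
        for entry in parsed.entries[:per_feed]:
            items.append((entry.get("published", ""), source, entry.get("title", "")))
    return sorted(items, reverse=True)         # newest first, every source in one place

for published, source, title in commons_digest():
    print(published, "|", source, "|", title)
```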

Creating a thriving network-based commons is the task set before us. Bollier gives us some insight into the magnitude of what is at stake. We can, by the way we participate and by what we create, build a truly common and truly valuable resource.

Give the Bollier article a look. Then think a bit about what we can do to make ours a transparently valuable network–one that will encourage all to participate fully.

(Hey, we can be idealists if we want. :-))

Thinking in Tucson, AZ: Getting it Right

Muniwireless points to a study that caught my attention, one meant to inform the writing of Tucson’s prospective wireless RFP (request for proposals). First, the extent of the research and the detail in the study far exceed what goes into most full proposals, much less an RFP. A large amount of information about broadband usage, digital divide issues, and market questions is in this study—enough to provide plenty of well-researched data to support both public purposes (like economic expansion and bridging the digital divide) and a strong marketing plan (it includes current costs of broadband and geographical usage patterns).

Lafayette needs such a public document. Without the baseline it provides, it will be difficult to demonstrate the success of the fiber project. You need such a baseline to demonstrate the economic benefits and to document the effects of lower-cost broadband on bringing new faces into the broadband world.

But, if possible, even more impressive than the original survey research was the quality of thought exhibited. Doing a study like this is a job–and most folks are tempted to do the job to specs even if that is not what the reality of the situation calls for. CTC, the consultants doing this study, didn’t succumb to that temptation. The job specs, it is clear, were to tell the city how to write an RFP that would get private companies to provide city-wide wifi without municipal investment. Universal coverage, closing the digital divide, and economic development were apparently important parameters given to the consultants.

Trouble is, it’s become clear that the private sector simply won’t, and perhaps can’t, fill that wishlist. And CTC, instead of just laying out what would give such an RFP the best chance, more or less told the city it couldn’t have all that without at least committing to being the major anchor tenant. That was responsible, if unlikely to make the clients happy. And on at least two other points (digital divide issues and fiber) they pushed their clients hard.

1) Digital Divide issues:

The interviews indicated that as computers become more affordable, the digital inclusion challenge that needs to be addressed is not as much equipment-based but rather how to overcome the monthly Internet access charge. (p. 18)

Concentrate WiFi provider efforts on low-cost or free access – not the other elements of the digital divide. (p. 17)

Entering the digital community is no longer about hardware; it’s about connectivity. The hardware is a one-time expense that is getting smaller with each day. Owning a computer is no longer the issue it once was. Keeping it connected is the real fiscal barrier these days. As their survey work shows, the people most affected know this themselves.

A CTC review of Lafayette’s project would note we’re doing several things they say most cities neglect to do: 1) LUS has consistently pushed lower prices as its major contribution to closing the digital divide (and we must make sure that there is an extremely affordable lower tier available on both the FTTH and the WiFi components). 2) Ubiquitous coverage is a foregone conclusion; LUS will serve all–something no incumbent will promise (and something they have fought to prevent localities from requiring). 3) Avoiding means-testing. Lafayette’s planned solutions are all available to all…but most valuable and attractive to those with the least. Means-testing works (and is intended to work) to reduce the number of people taking advantage of the means-tested program. If closing the digital divide is the purpose, means-testing is counterproductive.

About hardware: yes, working to systematically lower the cost and increase the accessibility of hardware through wise selection, quantity purchases, and allowing people to pay off an inexpensive computer with a small amount each month on their telecom bill makes a lot of sense and should be pursued. But the prize is universal service and lowering the price of connectivity. Eyes, as is said, on the prize.

CTC additionally recommends against allowing extremely low speeds for the inclusion tier and for a built-in process for increasing that speed as the network proves itself. It also rejects the walled-garden approach, an approach which, as they discreetly don’t say out loud, turns the inclusion tier into a private reserve that will inevitably be run for the profit of the provider.

Good thinking…

2) The Necessity of Fiber

CTC also boldly emphasized fiber, not wireless, as the most desirable endpoint for Tucson.

We strongly recommend that the City of Tucson view the WiFi effort as a necessary first step, then look at ways to embrace and encourage incremental steps toward fiber deployment to large business and institutions, then smaller business, and eventually to all households. (p. 19)

Although wireless technologies will continue to evolve at a rapid pace, wireless will not replace fiber for delivering high-capacity circuits to fixed locations. In addition, fiber will always be a necessary component of any wireless network because it boosts capacity and speed. (p. 20)

The report explicitly rejects the theory that wireless will ever become the chief method for providing broadband service to fixed locations like businesses or homes. Few in the business of consulting on municipal wireless networking are so forthright in discussing the limitations of wireless technologies and the role of fiber in creating a successful wireless network that is focused on what wireless does best: mobile computing.

Again, good thinking.

Communities would do well to think clearly about what they want, what is possible, and the roles fiber and wireless technologies can play in their communities’ futures. CTC has done a real service for the people of Tucson. Too much unsupported and insupportable hype has driven muni wireless projects. That unrealistic start will come back to haunt municipal broadband efforts nationally as the failed assumptions show up in the form of failed projects. But those mistakes were not inevitable. The people of Lafayette should take some comfort in the fact that we haven’t made the sorts of mistakes that Tucson’s consultants warn against and that we are planning on implementing the study’s most crucial recommendations.