As we automate more information work, what will be left for humans to do?

That’s the question that directs my research. We have made a great many assumptions, and produced a number of theories, over the years regarding the relationship between technology, work, and organizations. Most of those assumptions and theories have run into a few bumps where information technologies are concerned.

So where are we headed, as our capacity to automate work increases? Are we running out of things to do? Or do increasingly capable and (gasp) intelligent technologies simply take our work and organizations in directions we never anticipated?

What will widespread access to genetic information mean for society?

As someone who has been exposed to a fair amount of research in the management realm, I am often shocked/awed/dismayed by the use of psychological testing in the hiring process. These tests play a statistical game that really should only be played by those who understand the rules.

When I see new products in the world of genetic data, like 23andMe, I get a little concerned. We might as well accept that someday you will exchange these kinds of data either before or after an employment agreement. Many people consider the privacy factor too overwhelming for genetic information ever to become part of the employment process, but reality suggests that private employers are not bound by the rules for private data that many assume. We are often, and quite legally, monitored at work; depending upon the state in which we are employed, we can even be fired for our political beliefs (even if those beliefs are expressed outside the workplace).

Anyhoo. Arrington, over at TechCrunch, released some screenshots and thoughts on his test data, courtesy of 23andMe. As the tests that underlie these kinds of services grow in size and focus, the data will only get more “reliable.” Firms can and will hire on the basis of the odds expressed in these results.
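
To see why hiring on those odds is a dangerous game, consider a back-of-the-envelope sketch of the base-rate problem. Every number below is hypothetical, made up purely for illustration:

```python
# A hypothetical back-of-the-envelope: why "the odds" in a genetic report
# can mislead a hiring manager. All numbers are invented for illustration.
base_rate = 0.01       # 1% of applicants actually develop the condition
sensitivity = 0.90     # the test flags 90% of true cases
false_positive = 0.05  # ...but also flags 5% of unaffected applicants

flagged_true = base_rate * sensitivity
flagged_false = (1 - base_rate) * false_positive

# Probability that a flagged applicant actually has the condition
# (Bayes' rule, written out as counts of flagged people).
ppv = flagged_true / (flagged_true + flagged_false)
print(f"{ppv:.1%}")  # ~15.4%: most flagged applicants are false alarms
```

Even a fairly accurate test, pointed at a rare condition, flags mostly healthy people. An employer reading a “positive” as destiny would be playing the game without knowing the rules.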

Oops, there’s another subatomic particle…

So I am reading this book on artificial intelligence. I reckon it’s an “old school” kind of book: Artificial Intelligence, by Philip Jackson. Even when my head spins a bit from the mathematico-logical stuff filling the pages, I am struck by the casual and approachable style of the author. Maybe this makes it a coffee table book on AI, but I don’t think that is truly the case. There is something of value in being able to communicate complex ideas in simple terms, or to write in a scholarly yet conversational tone. In fact, this seems to be a lost art in academic circles.

In a section early in the book discussing just what kinds of problems can be expressed mathematically, Jackson spins out “oops, there’s another subatomic particle.” I figured that was a line you don’t run across every day.

This bot has eyes, this one has none

Image recognition has become a hot topic in the software world, due largely to the success of photo sites like Flickr (and Yahoo! Photos). Slowly, software is emerging that can not only “detect” the contents of a photo, but also “recognize” those contents by way of comparison against any number of other photos. The distinction of recognition is an important one, since at that point an application requires a sort of intelligence: the capacity to place an image within a network of relationships involving any number of photos. This is like that. You tell the machine “that” is David, and now it can go out and find other photos of David, wherever they might be. Therein resides the bot beyond the initial application.
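
To make the detection/recognition distinction concrete, here is a minimal sketch, assuming the hard part (turning each photo into a feature vector) has already been done. The photo names and vectors below are hypothetical:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors: in a real system these would come from
# an image feature extractor, not be hand-written like this.
library = {
    "photo_001": np.array([0.90, 0.10, 0.30]),
    "photo_002": np.array([0.10, 0.80, 0.50]),
    "photo_003": np.array([0.85, 0.15, 0.35]),
}

# "Detection" has already happened: every photo is a vector. "Recognition"
# is relational: tag one photo as David, then rank the rest by similarity.
david = library["photo_001"]

matches = sorted(
    ((name, cosine_similarity(david, vec))
     for name, vec in library.items() if name != "photo_001"),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in matches:
    print(name, round(score, 3))  # photo_003 scores highest: likely David
```

The bot part is just this loop pointed outward: run the same comparison against photos it finds anywhere, not only those in your own library.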

A piece on CNET discusses some of the issues behind photo recognition bots, in particular through a discussion with the executives of Riya, a photo startup. There is also a rather introductory discussion of language search bots, and the desire to move beyond “keywordese” – a word Barney Pell uses to describe the funny language we have to learn in order to get keyword search to actually find the things we are looking for. There are also a few comments from Esther Dyson, by way of a startup featured in the story in which she has (go figure) invested.

Tom Mitchell, chair of Carnegie Mellon’s machine learning group, reckons that truly language-smart search bots, able to “read” the web, will run rampant by 2015. That’s only about eight years from now. I don’t know what I think about such a prediction, but it seems a reasonable bet. Stats-based systems will probably make the first go at it, but I wonder whether more “heuristic”-like bots will have a better chance. That seems to be the way in which we learn language: first simple words, then simple phrases that carry a meaning. Once those words and phrases can be placed in context and connected to one another, well, it would seem you have the capacity for knowledge.
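
As a toy illustration of the stats-based route, consider how far raw co-occurrence counts alone can go. The corpus here is hypothetical and absurdly small:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real system would crawl the web instead.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: which word tends to follow which.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

# The statistics alone "learn" that "sat" is usually followed by "on",
# and that cat/dog and mat/rug occupy near-identical slots -- a first,
# shallow step toward meaning-by-context.
print(follows["sat"].most_common(1))  # [('on', 2)]
print(follows["the"].most_common(4))  # cat, dog, mat, rug share a slot
```

Counting gets you the words and the phrases; the open question is whether counting alone ever gets you the network of connected meanings, or whether that takes something more heuristic.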

Regional genetic coding of commodities

The WSJ raised some eyebrows today by highlighting some of the techniques tech companies use to maintain different regional price markets for nearly identical products. Examples include power supplies with voltage limits (Apple and Nintendo) and inkjet cartridges that buddy up only with regionally similar printers (HP). It’s only a matter of time before biotech goes the way of the tech firm, but in an even more creative manner.
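
The mechanism amounts to something like this hypothetical handshake, in which an otherwise identical consumable refuses to work outside its assigned region (the region codes are made up):

```python
from dataclasses import dataclass

@dataclass
class Cartridge:
    region: str  # e.g. "NA", "EU" -- hypothetical region codes

@dataclass
class Printer:
    region: str

def accept(printer: Printer, cartridge: Cartridge) -> bool:
    # The product is physically identical everywhere; this one check
    # is all it takes to keep the regional price markets separate.
    return printer.region == cartridge.region

print(accept(Printer("NA"), Cartridge("NA")))  # True
print(accept(Printer("EU"), Cartridge("NA")))  # False: "wrong" region
```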

The genetic engineering of foodstuffs is a science that provokes a whole range of responses from foodies the world over. In the wake of our renewed interest in manipulating genes, governments are passing laws defining just what can and cannot be sold in their countries. In essence, we are stepping closer and closer to regionally coding the genes of the food we eat.

For the better part of the last ten thousand years or so, corn has been…well…corn. Sure, you might have had white corn, feed corn, speckled corn, and those little candy corns. But the differences between corn styles were clear to the consumer, and only rarely did a government outlaw candy corn.

As we head into this next millennium, however, the differences between corns, soybeans, or any other kind of veggie your mother used to make you eat are becoming ever more subtle and complex. Varietals designed to be impervious to bugs, pesticides, and a suite of other ills are emerging. And as different countries push through unique legislation, for the first time in a very long time, corn may not simply be corn.

We already see some of the market consequences as organic veggies carry higher prices. But now think of something a bit more engineered and stockholder-oriented. Like the drugs on our shelves, governments will be lobbied to approve only certain genetic alterations. Beyond patent protections, these lobbying efforts will involve the legislative approval of only certain designs, or the legislative disapproval of most others. A well-placed patent, combined with a legislative backboard, could convert commodities into mini-monopolies.