Years of Devices

Patrick with many devices
Over the years I’ve had many computers that are now considered “devices”. The following are not so much reviews as a retrospective on what has changed and what hasn’t.

eMate 300

The oldest device in this photo. I used this in school for taking notes and writing. Easily the device with the longest useful battery life: it would go days without recharging with regular use. Charging was fast enough that I never really worried about whether it was charged or not. Great keyboard, great form factor. Newton OS was great for note taking and writing. Networking was totally beyond it. Syncing was usually done via IrDA and was fast enough for text, just.

Visor Pro MIA

The kids found the charger and the keyboard for my Visor Pro, and even a GPS module. The Visor Pro itself is missing. This little Palm OS device replaced the eMate. In every possible way it was worse… except that it synced over USB and I could use it with a more modern desktop computer. The keyboard was a four-fold Stowaway; I broke two in the year or two of use this saw. Still no networking ability at all. After just a month or two in college with this I gave up on the idea of a smaller device for note taking and bought a 12-inch PowerBook.

palmOne Treo

My first network-enabled device. I had a phone module (refurbished) for the Visor, but it never really worked right. The web was totally unusable on the Treo, but it did email fine. I also used a tool that converted web pages and feeds into Palm DOC files, and read news and other things on the Treo that way. The keyboard wasn’t really usable for more than a quick response. Battery life was reasonably awful as well. This was the first device that basically required battery babysitting from me.

Nokia Internet Tablet 770

Network enabled, but not cell enabled. First device that did mobile web browsing well enough to use, though the browser was still very limited and frustrating. Email worked great, multi-service IM worked great. Never saw daily use thanks to lackluster battery life and WiFi-only networking.

Nokia N810 Internet Tablet

Best build quality and form factor of any of the devices pictured. The slide-out keyboard worked for long emails, short note taking, IM conversations, etc. The camera was horrible; they shouldn’t have bothered with it. Used with a Bluetooth-paired smartphone for 3G internet access. Battery life of the N810 was fine; battery life of any cell phone running as a network access point was horrible. Used while traveling, or in long meetings. Attempts to use it as a GPS device led to the next item on the list…

TomTom GO720

Dedicated GPS device. Also a very bad iPod/MP3 player. Provided decent GPS directions that kept us from getting lost while traveling and moving to a new state. Everything other than the GPS functionality was pointless. Battery life was a joke; if the USB power adapter bumped loose on the rough roads around here, it usually turned off a few minutes later with no warning.

Pair of Nokia 6650 phones

My wife and I used these phones for a long time. Good battery life, good signal, working Bluetooth pairing. Calendar syncing never really worked right. Trying to use the web was a total joke. My wife’s was replaced with an iPhone 4S, mine with the next device.

Google One

Android at this point didn’t really work. I had this phone and would swap a SIM between it and the Nokia; I never managed more than a day or so without going back to the Nokia. Simple things like TAKING A PHONE CALL would often fail for odd and confusing reasons. Build-quality-wise, this was the WORST of the list. The keyboard was worse to use than the Treo’s.

Samsung Galaxy S

Web browsing worked fine, email worked fine. Typing was about the level of the Treo for speed, but I found I did type longer with the soft keyboard than with the old hardware one. Anything longer than an email was still painful. GPS sort of worked, but was overshadowed by the device’s spectacular failure: battery life. The Galaxy S could NOT make it through a whole day of use no matter how much I tried to baby it. I eventually got a custom home-rolled kernel with a number of patches and the rest of the CyanogenMod distribution to work, but even then it would limp home with maybe 4% battery left. God help you if you made the mistake of turning on the GPS. Contemplation of carrying more than one battery ended the day Apple shipped enough iPhone 5s to the San Francisco Apple Store.

iPhone 5

Taking the photo, thus not in the photo. Best mobile web browser ever (Chrome for iOS). Best email ever (Gmail). Best camera in a device ever. Totally sucks for note taking. Typing is no better than it was on the Treo. Contact and calendar management has gone nowhere since the Newton. Battery life… well, it makes it a whole day, so battery life is just barely good enough.

Raspberry Pi

The best little computer ever. For less than $150 you can have a complete computer (keyboard, mouse, power supply, monitor) running Debian Linux that fits in your palm. This is magical. You don’t need to write your own board support package for it, you don’t need to flash your own bootloader EPROMs, you don’t have to hack around in Forth to GET your CPU to start booting in the first place. The computer is fast enough that you don’t have to set up a complex cross-compiler toolchain to get started; you can just apt-get install gcc. For anyone who worked in the embedded space a decade ago this little box is magical.

time vs data

There is a bug against HTML5 proposing to remove <time> and replace it with <data>.

I think that in general <data> isn’t a bad idea. I agree that only being able to talk about <time> was a bit odd, and that many other values have the same effective use case. As mentioned by Ian Hickson:

  • dimensionless numbers (2.3)
  • numbers with units (5kg)
  • enumerated values, like months (February) or days of the week (Monday)
  • durations

However, I don’t think that <data> as it stands right now is done. As this example of the old <time> and the new <data> shows, there is a fairly heavy loss of semantics in the change.

Published <time pubdate datetime="2009-09-15T14:54-07:00">on 2009/09/15 at 2:54pm</time> 

Assigning the data to RDF via a magic human transformation step, simply to talk about what data is there, displayed in Turtle:

 @prefix magic: <https://gavin.carothers.name/vocabs/magic#> . 
<> magic:pubdate "2009-09-15T14:54-07:00"^^xsd:dateTime . 

The HTML contains a reasonable amount of data. We know that the contents of datetime is an ISO datetime, and we know that the relationship between that datetime and the page is its publication date.

Published <data value="2009-09-15T14:54-07:00">on 2009/09/15 at 2:54pm</data> 

Again, a magic human transformation to Turtle:

@prefix magic: <https://gavin.carothers.name/vocabs/magic#> . 
[] magic:unknown "2009-09-15T14:54-07:00" . 

This time around we don’t know much at all. We know that “2009-09-15T14:54-07:00” may somehow be related to the current page. We don’t know how the string is formatted, nor how the string is related to the page, if it is at all.

As currently proposed <data> sure looks like a step backwards, but maybe it can be a step forward.

Just some invalid markup:

<data type="dateTime" value="2009-09-15T14:54-07:00" property="pubdate"> 

Just some RDFa:

<data datatype="xsd:dateTime" content="2009-09-15T14:54-07:00" property="magic:pubdate"> 

Just some microdata plus some datatype:

<data type="dateTime" value="2009-09-15T14:54-07:00" itemprop="pubdate"> 

Adding a generic data element without datatyping doesn’t seem like a good idea. With datatyping I can see it working better than creating an element for every kind of data.
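
To make the difference concrete, here is a rough sketch of what the two markups give a consumer, built by hand with Python’s rdflib (my choice of tooling; the page URI is a stand-in, and the “magic” vocabulary is just the one invented for the examples above):

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

MAGIC = Namespace("https://gavin.carothers.name/vocabs/magic#")
page = URIRef("https://example.org/post")  # hypothetical page URI

g = Graph()

# What <time pubdate datetime="..."> supports: a known relationship
# (publication date) and a typed xsd:dateTime literal.
g.add((page, MAGIC.pubdate,
       Literal("2009-09-15T14:54-07:00", datatype=XSD.dateTime)))

# What a bare <data value="..."> leaves you with: an unknown
# relationship and an untyped string.
g.add((page, MAGIC.unknown, Literal("2009-09-15T14:54-07:00")))

print(g.serialize(format="turtle"))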

schema.org as RDFa (Part I)

schema.org, an initiative by Google, claims once again that RDFa is too complicated. That’s not really true. Here in fact are the first examples from schema.org in RDFa. There is a bonus as well for using RDFa rather than Microdata… you can test that RDFa is valid and gives you what you expect TODAY. Microdata and schema.org? No validation (STILL!), and no public parsers.

A movie

The first example from Schema.org is about marking up a movie:

schema.org
<div itemscope itemtype ="http://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <div itemprop="director" itemscope itemtype="http://schema.org/Person">
  Director: <span itemprop="name">James Cameron</span> (born <span itemprop="birthDate">August 16, 1954)</span>
  </div>
  <span itemprop="genre">Science fiction</span>
  <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
</div>
RDFa
<div vocab="http://schema.org/" typeof="Movie">
 <h1 property="name">Avatar</h1>
 <div rel="director">Director:
   <span typeof="Person"><span property="name">James Cameron</span>
   (born <time property="birthDate" datetime="1954-08-16">August 16, 1954</time>)
   </span>
 </div>
 <span property="genre">Science fiction</span>
 <a rel="trailer" href="../movies/avatar-theatrical-trailer.html">Trailer</a>
</div>

That wasn’t very complicated. In fact, comparing with the example on schema.org: why do some attributes need fully qualified URIs/IRIs and some don’t? How does microdata know that director is referring to schema.org’s director and not some other one?
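
For reference, here are roughly the triples I would expect a parser to extract from the RDFa version above, sketched by hand with Python’s rdflib (the typed birthDate is my own assumption, and the trailer URI is a placeholder for the resolved relative link):

from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

SCHEMA = Namespace("http://schema.org/")

g = Graph()
movie, director = BNode(), BNode()

g.add((movie, RDF.type, SCHEMA.Movie))
g.add((movie, SCHEMA.name, Literal("Avatar")))
g.add((movie, SCHEMA.director, director))
g.add((director, RDF.type, SCHEMA.Person))
g.add((director, SCHEMA.name, Literal("James Cameron")))
g.add((director, SCHEMA.birthDate, Literal("1954-08-16", datatype=XSD.date)))
g.add((movie, SCHEMA.genre, Literal("Science fiction")))
# placeholder for the resolved href of the trailer link
g.add((movie, SCHEMA.trailer,
       URIRef("http://example.org/movies/avatar-theatrical-trailer.html")))

print(g.serialize(format="turtle"))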

Next is Spinal Tap!

<div vocab="http://schema.org/" typeof="Event">
 <div property="name">Spinal Tap</div>
 <span property="description">One of the loudest bands ever
 reunites for an unforgettable two-day show.</span>
 Event date:
 <time property="startDate" datetime="2011-05-08T19:30">May 8, 7:30pm</time>
</div>

Okay, not even bothering with the Microdata/schema.org version. There are a few differences, but not much. Where is all this complexity that RDFa introduces?

An offer, for a blender

<div vocab="http://schema.org/" typeof="Offer">
 <span property="name">Blend-O-Matic</span>
 <span property="price">$19.95</span>
 <link rel="availability" href="http://schema.org/InStock"/>Available today!
</div>

Yeah, still not seeing why RDFa is more complicated.

Okay, that’s all I feel like dealing with before breakfast. Will look at a few more later.

In the meantime, gratuitous baby pictures!

Update:

By specific request, RDFa for geo tagging:

schema.org
<div itemscope itemtype="http://schema.org/Place">
 <h1>What is the latitude and longitude of the <span itemprop="name">Empire State Building</span>?</h1>
 Answer:
 <div itemprop="geo" itemscope itemtype="http://schema.org/GeoCoordinates">
 Latitude: 40 deg 44 min 54.36 sec N
 Longitude: 73 deg 59 min 8.5 sec W
 <meta itemprop="latitude" content="40.75" />
 <meta itemprop="longitude" content="73.98" />
 </div>
</div>
schema.org as RDFa
<div vocab="http://schema.org/" typeof="Place">
 <h1>What is the latitude and longitude of the <span property="name">Empire State Building</span>?</h1>
 Answer:
 <div rel="geo">
 Latitude: 40 deg 44 min 54.36 sec N
 Longitude: 73 deg 59 min 8.5 sec W
 <span typeof="GeoCoordinates">
 <meta property="latitude" content="40.75" />
 <meta property="longitude" content="73.98" />
 </span>
 </div>
</div>
RDFish RDFa
 <div prefix="schema: http://schema.org/ dc: http://purl.org/dc/terms/ pos: http://www.w3.org/2003/01/geo/wgs84_pos#"
  typeof="schema:Place pos:SpatialThing">
 <h1>What is the latitude and longitude of the <span property="dc:title schema:name">Empire State Building</span>?</h1>
 Answer:
 Latitude: 40 deg 44 min 54.36 sec N
 Longitude: 73 deg 59 min 8.5 sec W
 <meta property="pos:latitude schema:latitude" content="40.75" />
 <meta property="pos:longitude schema:longitude" content="73.98" />
 </div>

As with the earlier conversions, these are 5-minute jobs without really spending much time thinking about them. But these WORK, can be validated today, and are still very simple. The RDFish version above does start to use some RDFa features that are considered confusing. It uses three vocabularies rather than just one. To do this it does use the much-feared PREFIX. I’ve covered my opinion of prefixes before. I still stand by my statement that prefixes are simply not that complicated. Also, the RDFish version drops the added GeoCoordinates instance and the intermediate schema:geo property; I didn’t really see why they were there. Other possible improvements include adding datatypes to the properties in RDFa, but that’s not really necessary in this case.

RDF Database Expectations

Background

I use RDF databases to store 100% of O’Reilly Media’s product metadata. The catalog pages, shopping cart, electronic media distribution, product registration process, ONIX distribution, and most internal product reporting are based on RDF. The following are observations of what is necessary from an RDF database in order to successfully and easily build a similar system. As for what the clients need to be able to do… working on that. The features required are listed in descending order of priority to me.

SPARQL

An RDF or semantic database that does not support at least SPARQL 1.0 is not interesting. Writing queries in Prolog, XQuery, or another DSL is not acceptable. Getting folks to understand graphs and RDF is hard enough without also having to teach them languages that don’t work easily with graphs.
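
To be concrete about the baseline, this is the sort of query that should just work against any store; the sketch below uses Python’s rdflib purely as a stand-in (my tooling choice here, not one of the products being discussed):

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix dc: <http://purl.org/dc/terms/> .
<urn:x-example:1> dc:title "Just a Geek" ; dc:creator "Wil Wheaton" .
""", format="turtle")

# Plain SPARQL 1.0 SELECT: the minimum bar, nothing exotic.
query = """
PREFIX dc: <http://purl.org/dc/terms/>
SELECT ?title ?creator
WHERE { ?product dc:title ?title ; dc:creator ?creator . }
"""
for title, creator in g.query(query):
    print(title, creator)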

Correct

When running a simple query that works fine on many other implementations, I don’t expect to find INCORRECT results. Throwing errors and saying something is unimplemented isn’t great, but it is far better than returning results that are just WRONG.

SPARQL +

SPARQL doesn’t really do enough without extensions. The features I’ve found to be most useful are LET and GROUP BY. LET can be used to “fake” bind parameters, create synthetic values for reports, and make complex queries much simpler. Without GROUP BY, nasty post-processing is often necessary to produce summary reports from SPARQL queries. Other helpful extensions are the XPath functions, a good set of always-useful tools that I already know from years of work in XSLT and XQuery.
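
LET was an ARQ-style extension (SPARQL 1.1 later standardized BIND alongside GROUP BY); a minimal sketch of the GROUP BY case, again using rdflib as a stand-in store with made-up product data:

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix dc: <http://purl.org/dc/terms/> .
<urn:x-example:p1> dc:type <urn:x-example:BOOK> .
<urn:x-example:p2> dc:type <urn:x-example:BOOK> .
<urn:x-example:p3> dc:type <urn:x-example:EBOOK> .
""", format="turtle")

# Summary report with no post-processing: count products per type.
report = """
PREFIX dc: <http://purl.org/dc/terms/>
SELECT ?type (COUNT(?product) AS ?total)
WHERE { ?product dc:type ?type . }
GROUP BY ?type
"""
for product_type, total in g.query(report):
    print(product_type, total)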

Named Graphs

Named graphs allow me to treat the RDF database as a document store. This dramatically reduces the complexity of loading and managing ETL operations. The SPARQL 1.1 Uniform HTTP Protocol for Managing RDF Graphs makes me very happy, and maps neatly on top of solutions that I’d already implemented before I even knew the SPARQL 1.1 Working Group existed. See Tenuki for my own implementation of graph updates over HTTP. Talis-style ChangeSets are useful too, but have been more complicated to generate than I had expected.
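
A sketch of what the Uniform HTTP Protocol buys you, using Python’s requests against a hypothetical endpoint (the URLs below are placeholders of my own, not Tenuki’s actual interface):

import requests

# Hypothetical SPARQL 1.1 Graph Store Protocol endpoint and graph name.
ENDPOINT = "http://localhost:8080/rdf-graph-store"
GRAPH = "http://example.org/graphs/products/9780596007683"

turtle = """
@prefix dc: <http://purl.org/dc/terms/> .
<urn:x-domain:oreilly.com:product:9780596007683.BOOK> dc:title "Just a Geek" .
"""

# PUT replaces the named graph wholesale, which is exactly the
# "database as a document store" behaviour described above.
resp = requests.put(
    ENDPOINT,
    params={"graph": GRAPH},
    data=turtle.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
resp.raise_for_status()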

Concurrent

I expect to be able to write updates to graphs and read from graphs at the same time. I’ve encountered limitations related to multi-reader/single-writer locks at the graph level, dataset level, and server level.

Parses RDF/XML

Twice now I’ve come across products that fail to parse perfectly valid RDF/XML. We aren’t talking complex RDF/XML structures either, just simple Literals and XMLLiterals that contain non-ASCII data, or XML mixed content.
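
For the record, this is the level of RDF/XML I mean; the sketch uses rdflib and an invented title, but any conformant parser should accept a document like it:

from rdflib import Graph

rdf_xml = """<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="urn:x-example:product-1">
    <!-- a plain literal containing non-ASCII characters -->
    <dc:title>Das große Überraschungsbuch für naïve Café-Besucher</dc:title>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")
print(g.serialize(format="turtle"))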

Installable

A database server should not require me to write software in order to use it. Simple command-line clients and simple start/stop scripts should not be too much to ask.

SPARQL EXPLAIN

SPARQL query optimizers tend to be odd beasts. I have found that it’s really easy to go from a query that runs in no measurable time to one that will, for all practical purposes, never complete. Understanding why with EXPLAIN is hard, without EXPLAIN impossible. Profiling would of course be better, but I’ll take what I can get.

Documented

If your RDF database supports a feature but doesn’t document the feature anywhere, it doesn’t support the feature. I should not need to read source code to find out what SPARQL syntax and extensions the database supports. If your product is a closed-source RDF database, documentation should really be at the top of this list, as I can’t figure it out for myself by reading the code.

License

I get that databases are big money. I know Oracle owns MySQL now. It doesn’t matter. Rails, Django, Pylons, Wicket (insert your favorite SQL-based web framework here) would not have existed without a good-enough SQL database like MySQL or Postgres. A semantic web framework will be hobbled if the only backends available are high-cost commercial ones. A commercial database means that we can’t contribute fixes even if we want to.

Conclusion

I’ll deal, and do deal, with the lack of most of these. But each time one of these features is missing it gets harder and harder for me to sell the idea of using RDF and semantic databases to management. The benefits of using RDF do in fact make up for missing tons of these features, but if we want RDF to be accepted as a model for day-to-day development on a par with SQL databases, these need to be addressed.

Prefixes, not that complicated.

The use of prefixes that can be bound to arbitrary strings then combined with
other strings to form a third set of strings is IMHO too complicated for a
technology intended for broad Web deployment (e.g. in text/html).

Bug 7670 – Use of prefixes is too complicated for a Web technology

As stated, I’m not really convinced. A few nights ago when I first saw this I was slightly upset, and my wife asked what was wrong. I ended up bringing a pad of paper to bed in order to explain. Nothing like XML/RDF prefixes for pillow talk. I should point out that my wife knows only some HTML and next to nothing about RDF or XML. She does have a degree (German and History; it’s important, we’ll get back to this); I don’t.

Anyway, as it turns out, it’s reasonably easy to write triples on paper using N3 notation. After about 10 minutes my wife was having no trouble understanding how to write statements like “The article about Michelle Obama on The Drudge Report was issued on 2009-09-18.” Then on a new page I changed a prefix definition! THE HORRORS! dc: now stood for http://www.dccomics.com. “Well, that’s not the same dc, so I guess I need to use another prefix for the Dublin thingy.” Not that confused, then. She pointed out that the citation method of The Chicago Manual of Style, used by a wide range of disciplines and a wide range of people, is far more complicated. It has features that would horrify the HTML WG: tokens whose meaning is 100% dependent on scanning backwards for the last instance of another token (ibid.), so that while copy editing it’s massively important to keep track of them when moving blocks of text around. Shortened person and book names that are document-dependent are the norm. Yet hundreds of thousands of people are able to use this complex citation method. Do people screw it up? All the time. Do they understand why and fix it? Of course.
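
The mechanics really are that small; a sketch in Python with rdflib (my tooling here, nothing to do with the bug report) shows the whole trick:

from rdflib import Namespace

# A prefix is just a short name bound to a string; a prefixed name is
# that string plus a local part.
dc = Namespace("http://purl.org/dc/terms/")
print(dc.issued)  # http://purl.org/dc/terms/issued

# Rebinding the prefix changes nothing except which base string is used.
dc = Namespace("http://www.dccomics.com/")
print(dc.issued)  # http://www.dccomics.com/issued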

We were able to come up with rules that make using prefixes in almost any context simpler. Note, these are for the most part AUTHORING guidelines, not requirements when reading:

  1. Reusing the same prefix in the same document with different meanings is horribly confusing (“If you did that, I’d break your fingers.”). Possible to figure out, but not really desirable. Seems like a reasonable place for a warning.
  2. Defining all the prefixes in one place makes it simpler to keep track of them. But she understood when it would be simpler to define a new prefix for a section of content.
  3. “Couldn’t you have a simple tool that just shows you what prefixes are defined at any point in the document?” How such a tool has failed to exist in the XML world… I may write one.

I really don’t think prefixing is too complicated for wide adoption. Document authors today deal with complex style guides like The Chicago Manual of Style, the Modern Language Association, and the APA. All of these are at least as complicated as the notion of prefixing, some more so.

Understanding Microdata, now with more understanding

Okay, I’m giving this whole HTML5 microdata a shot for real. 4475 HTML pages of it in fact.

After reading what Hixie wrote, I tried creating a new sample. It didn’t feel any stranger than the markup for RDFa. From a typing perspective, it is annoying to have to type the whole URI; from a reading perspective, it’s much clearer what’s going on, at least to me. I also won’t be typing these every time; template systems are neat like that. Now, as far as I can tell, the XHTMLy solution to this using XML entities is not supported in HTML5.

I should have known better than to jump right from the sample into doing template markup. Having spent the morning making sure that I was producing valid HTML5, the addition of microdata caused errors at Validator.nu. It seems the method of using <link> elements is not currently supported by the validator. In fact, even our sample fails to validate. Ugh.

Philip’s parser works nicely for testing to see if what I have is working, given that I can’t use the validator.

At this point I have markup for the relationships of Manifestations of an edition (Expression). Adding the markup for the publication dates was much more unpleasant. It seems that I have to repeat the whole:

<some_tag
     itemprop="http://vocab.org/frbr/core#embodiment"
     item="http://purl.org/vocab/frbr/core#Manifestation">
     <link itemprop="about" href="${product.subject}">

every time I need to talk about ${product.subject}. And the microdata parser happily adds the relationship all over again.

  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596007683.BOOK> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596806316.BOOK> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596802189.EBOOK> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596802028.SAF> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596007683.BOOK> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596806316.BOOK> ;
  <http://vocab.org/frbr/core#embodiment> <urn:x-domain:oreilly.com:product:9780596802189.EBOOK> .

There is a great deal of markup smell coming from this page now. I think from here I’m going to try this as XHTML (5?) and go back to RDFa and see how that goes.

Will send this and the earlier post to the WhatWG mailing list as soon as my subscription is approved.

Trying to understand Microdata? RDFa?

Been trying to follow the RDFa/microdata mess. This isn’t academic. I have a nice open ticket that says “Insert inline metadata into O’Reilly Catalog pages” which is due in a large release at the end of September.

Do I expect Google to index my page a whole lot better? Nah. (That’s why we’re doing complete HTML chapters of our books, and full HTML tables of contents.) Do I expect our internal tools to index it better? Maybe, if I pray to the right search gods. Can I think of some crazy shit to do in jQuery with the few attributes I have in there? Oh yes. What exactly is going to come of us putting microdata in our pages? No clue, but then we didn’t really know what Web 2.0 was in 2004, or what this strange World Wide Web thing was in 1992 (see the Online Whole Internet Catalog, in which we, uh, printed the internet).

Let’s get started. I know what metadata I need to express. Here is a short version of it expressed in Turtle. There are a number of other fields, but this will give you the gist.

@prefix dc: <http://purl.org/dc/terms/> .
@prefix frbr: <http://purl.org/vocab/frbr/core#> .

<http://purl.oreilly.com/works/45U8QJGZSQKDH8N> a frbr:Work ;
     dc:creator "Wil Wheaton"@en ;
     dc:title "Just a Geek"@en ;
     frbr:realization <http://purl.oreilly.com/products/9780596007683.BOOK>,
         <http://purl.oreilly.com/products/9780596802189.EBOOK> . 

<http://purl.oreilly.com/products/9780596007683.BOOK> a frbr:Expression ;
     dc:type <http://purl.oreilly.com/product-types/BOOK> . 

<http://purl.oreilly.com/products/9780596802189.EBOOK> a frbr:Expression ;
     dc:type <http://purl.oreilly.com/product-types/EBOOK> .

This sample uses two vocabularies that exist in the wild. Dublin Core is a very mature standard, developed by a reasonably heavyweight process, with many serializations and uses. FRBR too is a standard, developed by a rather austere body, the International Federation of Library Associations and Institutions; the RDF realization of it, however, isn’t from them but rather from a few guys who needed to represent it. Reasonably smart guys, but no giant standards body here.
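
Before touching any HTML, a quick sanity check that the Turtle above says what I think it says; a Python/rdflib sketch (the work.ttl file name is my own, holding the sample above):

from rdflib import Graph, Namespace

FRBR = Namespace("http://purl.org/vocab/frbr/core#")

g = Graph()
g.parse("work.ttl", format="turtle")  # the Turtle sample above, saved locally

# Find the Work and list the Expressions that realize it.
work = next(g.subjects(FRBR.realization, None))
for expression in g.objects(work, FRBR.realization):
    print(expression)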

Took about 15 minutes to whip up a simple RDFa-based representation. Now, I know RDF reasonably well, XML very well, and have decent HTML skills. So I admit my experience is not going to be the norm, but it didn’t feel a whole lot harder than the first time I was trying to use hCard. I screwed up a few times, mixing up where to use rel= vs. property=. I also forgot that I can’t just stick a <UL> in another <UL> without the picky <LI>, and left off at least one close tag. Made all those mistakes in just 32 lines of HTML. But a few quick iterations with validation and it was all green check boxes. I screwed up my late-night hand-written HTML at about the same rate I screwed up RDFa attributes. I had read the RDFa primer two months ago, but didn’t remember much other than that there were some attributes and they went on some tags. Didn’t use the primer, just looked at the example content from RDFa4Google. Used Elias Torres’s RDFa parser to test my results and validator.w3.org for my HTML.

Felt reasonably happy with my RDFa result. Worked as expected. Microdata time!

Okay, got my Microdata spec. Finding a validator or parser, however, did not go well. Five minutes in Google and Bing turned up the expected HTML5 validator.nu, but nothing in the way of a microdata validator or parser. I’ll be honest, I was very tempted to stop here. Given the mistakes I made with RDFa, I’m very skeptical of my ability to write Microdata without the help of a parser. But I imagine there is one, and once I post this someone will tweet about it 5 minutes later.

Huh, okay, I have my outer item for the Work:

<div id="http://purl.oreilly.com/works/45U8QJGZSQKDH8N" 
                item="http://purl.org/vocab/frbr/core#Work">
    <ul>
        <li><label>Title:</label>
          <span itemprop="http://purl.org/dc/terms/title">
            Just a Geek</span></li>
        <li><label>By</label>
          <span itemprop="http://purl.org/dc/terms/creator">
            Wil Wheaton</span></li>

That wasn’t very hard at all. I’m completely lost as to how to relate that work to the two expressions, however. It looks like I’m limited to my microdata being in an <a> tag linking to the expressions. And I really don’t understand the idea behind:

The value is the element’s textContent.

Does this mean I can’t use any data that isn’t displayed directly on the page? What if the data would be better expressed in a machine-readable form? In my case the product type http://purl.oreilly.com/product-types/EBOOK really isn’t very human-friendly. Ideas on how to express the same metadata, or its equivalent, in microdata are very welcome. This is the best I could do.

I was expecting more tooling and examples for Microdata given its inclusion in HTML5. I was very surprised by the lack of tooling and the almost complete lack of real-world examples.

shutdown -r now

Running the most recent Ubuntu builds leads to odd things happening from time to time. This morning’s little bit of joy was:

-bash: shutdown: command not found

It seems that somewhere in the last aptitude dist-upgrade the upstart package had been dropped from the selected list. For those wondering, upstart provides a large number of the functions needed for init.d and the general run-level management process. There used to be a package for System V-level compatibility where the shutdown binaries lived; that package no longer exists. Simple solution in the end: just install upstart again.

Why do I bring this up on the blog? Well, it was time to restart the blog again. See, a few weeks ago I upgraded it too, to WordPress 2.8.1; or rather, WordPress thought it upgraded itself to 2.8.1. What really happened is that it completely broke itself to the point that it wasn’t even logging any errors. This is a nice clean fresh install of WordPress 2.8.1.

I do love upgrading software on computers, always thrilling.

Returning a Power Adapter for an External Hard drive

Replying to your email can you please provide me with the following information that the RMA department is required for us to ask our customers.

It’s a power adapter, and the cable doesn’t fit in the power adapter socket.

Computer Type:

Well, some of my computers are White, some are Black, two are Metal, one is blue, and yet another is two-tone Green. Some are Macs, some are Intel Macs, some are IBMs, some are ASUS, a few are from Nokia… is this relevant?

Speed that your running:

Fast? Away from ever buying from OWC again? I guess I’m also sitting down, so I may not be running when you read this.

Operating System:

It’s a power adapter! It doesn’t have an OS. If it does, I am very, very afraid of it. There are, however, computers that I own that run Mac OS X, Windows, Linux (Ubuntu 9.04), QNX, and other OSes; exactly how this has anything to do with a power adapter is somewhat beyond me.

Thanks,

… someone may want to update whatever computer program (I assume) requires those fields.

Gratuitous Baby Photos

Since I finally managed to get the blog started a few things have happened. The most important being that back on January 20th I became a father! My wife points out that it is now April and that I said before that I would post at least once a week. Right, didn’t do that. But I’m still doing better than my LJ account: 3 posts in 5 years. I may not be cut out for this blogging thing. Anyway, baby pictures!

Oliver prepares for his day out

My wife, a much better blogger, tells me I should link to the photo album here. I think I’m getting the hang of this. I’m really much better at writing blogging software; she’s really much better at using it.