Author's Archive

Make It Sing

[Image: blacksmiths]

I have a Jeep about half my own age, and despite the creaks in both our joints, we somehow manage to create a semblance of grace now and then. The vibration of the engine, transmitted through the bones of my foot as it lies on the clutch (lightly enough not to feather it), or the degree and delta of centripetal force (unconsciously, I lean left to align my head with this off-axis down) explain wordlessly to me the limitations of the tires’ grip as I round a frosty curve, the elusive triple point that lies between momentum, throttle, and gearing. And I’m no racing driver — you have this loop, too, whether you drive a manual or automatic, whether you maneuver aggressively or defensively. It’s something that happens when you and the car reach an accord, so to speak.

A few Christmases ago I bought the family a great old axe, but at first its unfamiliarly short and straight haft made me more likely to split my own foot than the morsel of wood awaiting its sentence before me. Over the course of a few dozen swings I found it didn’t want to be wielded like an executioner’s axe, describing as many degrees of a circle as were warranted by the toughness of the wood, but it preferred to be brought down straight, like the guillotine. This necessitated a totally new movement of my hands and body but eventually it struck with greater power and precision than I had been able to muster with its modern, long-necked predecessor.

Between me and my Cherokee, and between my hands and the tool, and between you and many of the things you use every day, there is a complicated but elegant feedback loop, a physical dialogue, the topic of which is harmony of operation. The relationship that you build with a device, whether it’s a car, a hammer, a brush, a cello, or anything else, is a self-optimizing relationship. First you make it speak, then you make it sing.

Why does this matter? Because so few of the devices we are adopting today will ever sing like that.

It’s not just that things are complex. Driving a car is complex; the forces, sounds, visual input, motor coordination and everything else that goes into driving become second nature because we learn to operate the vehicle as an extension of ourselves. And it’s not just that things are virtual. Anyone who has had a complicated workflow and found themselves the master of ten windows spread over three monitors and two operating systems has juggled a dozen tasks and ideas, performing as complex a task as an orchestra conductor or jet pilot.

The problem is that we are introducing processes that have maxima we can’t minimize, and minima we can’t maximize, by our own efforts. No axe is so difficult to use that you can’t master it in time. But no matter how good you are at using a smartphone, the elegance and quality of your process is, fundamentally, out of your hands.

With what devices and services today can you achieve the same level of synchrony you enjoy with your car as you parallel park, your fork and knife as you herd peas around your plate, your keyboard as you tap out a caustic response to this article at five characters per second?

I see exceptions for coders, who achieve a sort of second sight with the colors and symbology of their language of choice, for gamers whose thumbs make analog sticks and 256-stage buttons dance through a hell of bullets, and for photographers, their fingers blindly yet unfailingly seeking out dials and switches while the brain simultaneously calculates the arc of a ball or the fraction of a second left until the toddler’s smile strikes its apex.

But the most ubiquitous device of the modern digital era, the smartphone, is not susceptible to such talents. It may be always in your hand, but it never acts as an extension of it.

Oh, sure, you can learn the quickest way to get a picture through retouching and into Instagram — the “Save changes,” “Send to…” and “Submit” button positions memorized, the geotag set to automatic, the service sniffer set to repost and promote the latest item at the requisite SoLoMo watering holes. Congratulations, you’ve built a Rube Goldberg machine that mechanically duplicates button pressing. And what a profoundly inelegant series of arbitrarily placed button presses it is, interrupted by unskippable dialogues, animations, and workarounds!

Have you ever remarked on the grace with which an iPhone user closes down unused processes? The casual dignity of a flick to bring down a notifications shade, the inhuman rapidity with which a home button is double-pressed? Of course not. You could practice button-pressing and menu flicking for weeks and your flicks and presses would be little or no more effective than anyone else’s.

Wearables? True, gestural tech and limb tracking like that of the Kinect or Myo add an interesting new way to interact, but these things are meant to capture gross, simple, or repetitive movements; even if the nearly imperceptible twist of the wrist employed by a painter to add an ironic curl to the lips could be detected, would it matter? The threshold for whatever gesture he has indicated was reached long before such subtleties were taken into account. You think a photo will show more detail because you pinch-zoomed exactly along the 45-degree line? You think a page will load faster because you clicked at the exact center of the link?

As one last example: even in photography is the satisfaction of successful operation being eroded. Many lenses and systems do not actually connect the focus ring to the focal gearing, but instead read the position of the ring digitally and pass that information to the CPU, where its scale, jitter, and acceleration are weighed; this data is returned to the chip in the lens, which adjusts the focus by approximately the amount it thinks you would have wanted it to move, had it been mechanical to begin with. Naturally this takes time and is rarely satisfying or accurate. But even if it were advanced to the point of being imperceptible, it would still be inferior to the mechanical process because it is a simulation of it; and if it advanced beyond this and predicted your focal point (let us, against all odds, assume this works flawlessly), it would no longer be you operating the mechanism or the simulation of a mechanism, but rather using a ring-shaped menu to select from a list of subjects. Just try to make that sing.
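
To make the contrast with a direct mechanical linkage concrete, here is a minimal sketch of such a focus-by-wire loop. It is purely illustrative: the sampling interval, gains, and speed threshold are invented for the example, and it is not any manufacturer’s actual algorithm.

    # Illustrative focus-by-wire sketch: the ring position is sampled, and the
    # lens CPU scales the change by how fast the ring is turning before
    # commanding the focus motor. All constants are invented for illustration.
    def focus_step(prev_angle, new_angle, dt,
                   slow_gain=0.5, fast_gain=2.0, speed_threshold=50.0):
        """Return a focus-motor step inferred from a change in ring position.

        prev_angle, new_angle: sampled ring positions in degrees
        dt: time between samples in seconds
        """
        delta = new_angle - prev_angle
        speed = abs(delta) / dt                      # degrees per second
        gain = fast_gain if speed > speed_threshold else slow_gain
        return gain * delta                          # arbitrary focus-motor units

    # A slow, deliberate nudge vs. a quick rack of the ring over the same interval.
    print(focus_step(0.0, 2.0, 0.1))    # 1.0, a fine movement
    print(focus_step(0.0, 20.0, 0.1))   # 40.0, a coarse, accelerated movement

However such a loop is tuned, it interprets your input rather than transmitting it, which is exactly the objection above.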

There’s no room for finesse or subtlety in these things because we are not the ones performing the work, or rather, we perform only a small part of it and set into motion a series of events over which we have little or no control. The digitizer, the processor, the transceiver, the microwave repeater, and the server do their work following, but independent of, our input. And before we could even do our part, the developer of the app, the developer of the firmware, the developer of the OS had to do theirs. Layer upon layer of things that you are not doing, that you can never effect, only activate.

I don’t pretend this is the end of doing things well, of course, or any other such absurd extrapolation. But I myself, and I suspect this is true of many others, get no little satisfaction from the process of doing things well, and here before us is a generation of tools which can only be instructed to carry out tasks, something you and I will never do better or worse than one another. Egalitarian? Democratic? That’s a charitable interpretation, if you ask me. Eliminating the necessity of doing something well could be a positive change. Eliminating the possibility of doing something well is a negative one.

Still, it’s not so dire as I make it sound. The consequence of all this is that there is more room to excel on a different stage, a higher one. If everyone has access to the same resources, it is the one who makes the best of them who takes the prize. Given the finest ingredients and top-notch facilities, no two chefs will produce the same meal. With the same light and the same camera, two photographers capture images that are worlds apart. So this embarrassment of riches comprising (among a hundred other things) the Internet, the social media landscape, and our fantastically powerful mobile devices is nevertheless empowering — but it is no longer the tools with which we interact that we must make sing, but what we are making with them.

No one can use the Facebook app better than another — but one may use the network to greater effect. No one can apply a filter with more finesse than another — but one may assemble a superior portfolio. No one can make an API return different data than another — but one may put that data to better use. No one can propagate an email through the network faster — but one may be more persuasive. The axe swings itself — but you can still build a better fire.


Devin Coldewey

February 10th

Gadgets

Parlor Tricks

[Image: legerdemain]

CES looms, as it frequently does, and soon we will all be awash in the deluge; the annual international carnival of gadgetry shows no sign of slowing. But beyond this yearly cycle, a longer pattern is about to reach an inflection point.

Mainstream technology is not exactly a paragon of ingenuity. The advances that trickle down to us as consumers are quite prosaic, really, compared to the high-risk world of startups (a few of them, anyway) or the churning erudition of academia and serious R&D. This lack of ingenuity manifests itself in a dozen ways, from acquisition culture to a general failure to grasp the zeitgeist, but the one I think matters the most at the moment is the tendency to advance by accretion.

Basically, it’s bullet-list syndrome. When the underlying technology doesn’t change much, one adds features so that people think the new thing is better than the old thing. Cars have always been a good example of this: a phase occurs between major changes (the seatbelt, for instance, or electronic fuel injection, or dash computers) when manufacturers compete on widgets, add-ons, luxuries, customizations — things inconsequential in themselves, but a moonroof or short-throw shifter is a useful psychological tool to make the pot look sweeter without adding any honey.

That’s what we’ve been seeing the last few years in consumer tech. Certainly there have been quantitative improvements in a few individual components, notably displays, wireless bandwidth, and processors, but beyond that our computers, phones, tablets, hi-fis, headsets, routers, coffee makers, refrigerators, webcams, and so on have remained largely the same.

Of course, one may reasonably say, that could be because of the greater amount of “innovation” being achieved in the area of software. But innovation isn’t a limited quantity that must be expended in one direction or another. Besides, Internet-connected apps and services have blown up mostly because of ubiquity, a consequence of ease of adoption, itself a result of microprocessors and flash storage reaching a certain efficiency and price.

At any rate, stagnation is occurring, which historically can be recognized by how different you are told things are. The iPhone and the Galaxy S 4 — what could be less alike, judging by the Super Bowl ads to which we will no doubt soon be subjected? Except they perform the exact same tasks, using almost identical interactions, access the same 10 or 20 major Internet services, and, in many important ways, are as physically indistinguishable as two peas in a pod.

The aspects in which we are told they differ, from pixel density to virtual assistant quality to wireless speed, are red herrings designed to draw the consumer’s attention; like a laugh track or “applause” sign, they’re signals that these, and not the innumerable similarities, are what you must consider. That they are not self-evident and you must therefore be told about them is testament to their negligibility.

These are the parlor tricks Apple and Nokia and Samsung are attempting to foist upon a neophilic customer base that desperately wants real magic, but which will accept sleight of hand if it’s convincing enough.

Tablets, too, are this way, and TVs, and fitness bracelets, and laptops, and gaming consoles, and so on and so forth.

This isn’t exactly a problem for consumers, since generally it means things have reached a high degree of effectiveness. I don’t know if you’ve noticed, but everything is great! TVs are huge and have excellent pictures. You have coverage just about everywhere and can watch HBO shows in HD on your phone on the train. Laptops can do serious work, even cheap ones, and not just Excel and email — video editing, high quality gaming.

But when everything is great, people stop buying new versions of things. And if you can’t do to the iPhone what the iPhone did to the Treo, you need to start putting bullets on lists.

Yet at some point, the list gets so long that people stop reading it, or else stop believing it. This is the inflection point I think we’re approaching. No one bought fridges that tweet whenever they’re opened, and no one buys a Galaxy S 4 because of some obscure networked dual-camera selfie stamp book, or whatever other garbage they’ve crammed into that awful thing.

At some point, things have to change in more ways than more. Sometimes less is the answer (as I’ve written perhaps too often), even within high tech: the Kindle, for instance, was (and remains) a very limited device; originally it wasn’t even better than the paperbacks it was meant to replace. And the original iPhone, let us not forget, was notoriously feature-poor, lacking rudimentary functionality found in flip phones worldwide. But both were very new in that they leveraged a powerful and promising technology to change the way people thought about what devices could be used for.

The next logical step along the path of proliferation (due to small, cheap microprocessors and memory) after devices that do a lot is devices that do too much — and after that, it’s devices that do very little. This last is the category that is making its real debut this year, in the guise of “wearables” and, more broadly, the “Internet of things.” The fundamental idea here is imbuing simple things with simple intelligence, though trifles like digital pedometers and proximity-aware dongles look for all the world like parlor tricks. There is reason to think that this trend will in fact create something truly new and interesting, even if the early results are a little precious.

Punctuated equilibrium is the rule in tech, and we haven’t seen any decisive punctuation in quite some time. Meanwhile the bland run-on sentence encompassing today’s most common consumer electronics is growing ungrammatical as the additions make less and less sense. And my guess is it will drone on for another couple years (not unlike some columns).

What will jump-start the next phase? Is it, as some suggest, the ascension of coffee mugs, toasters, and keychains to a digital sentience? Will it accommodate and embrace the past or make a clean break? Have we heard of it, or is it taking shape in the obscure skunkworks of Apple or IBM? I don’t know — and I suspect the prestidigitators at CES don’t know either.



Devin Coldewey

January 5th

Gadgets

256 Shades Of Grey

[Image: shades of grey]

I want a black and white computer, and I don’t want it out of sheer, wanton weirdness. I actually think it’s a good idea. Here’s why.

A huge, huge proportion of the content we consume every day is text. And, for many, an equal proportion of what they work with is text — be it code, email, or published content like this. For the consumption and creation of text, a monochrome display is all that is necessary, and in some ways even superior to a color one.

Pixels on an LCD like the one on which you’re probably reading this are made up of dots or sub-pixels — usually one red, one green, and one blue. The transistor matrix changes the opacity of a sub-pixel of a given color, and by working together they can create millions of hues and shades. But they work (with a few exceptions such as sub-pixel font smoothing and pentile layouts) only as triads, meaning a display with a resolution of 1920×1080 addressable pixels has three times that many addressable dots. (This is the reason why simply desaturating the image does not improve the resolution.)

Consequently, if you were to remove the color filters, each sub-pixel would become a pixel — all only able to show shades of grey, of course, but pixels nonetheless, and far more of them than there were before. Result: extremely high spatial resolution, far beyond the so-called “retina” point, even at close range — beyond even glossy magazine levels of sharpness, a dream for rendering type. (The two previous paragraphs originally contained miscalculations as to the pixel density, which have since been amended.)
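
As a rough back-of-the-envelope illustration of the arithmetic (the panel size and stripe layout here are assumptions chosen for the example, not any particular product):

    import math

    # Hypothetical 13.3-inch, 1920x1080 panel with an RGB-stripe subpixel layout.
    # Every number here is an assumption chosen for illustration.
    w_px, h_px, diag_in = 1920, 1080, 13.3

    # Physical width from the diagonal and the 16:9 pixel grid.
    aspect = w_px / h_px
    width_in = diag_in * aspect / math.sqrt(1 + aspect ** 2)

    print(f"color pixels per inch (horizontal): {w_px / width_in:.0f}")       # ~166

    # Remove the color filters and each of the three stripes in a pixel becomes
    # an addressable grey dot, tripling the horizontal dot count (the dots are
    # no longer square, but there are three times as many of them).
    print(f"grey dots per inch (horizontal):    {3 * w_px / width_in:.0f}")   # ~497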

It would also be brighter, or put another way, would require less backlight, since the removal of the filters allows far more light to pass through. That saves battery. Also saving battery is the reduced amount of graphics processing power and RAM necessary to store and alter the screen state, and so on. Small things, but not insignificant.

It would, of course, retain all of the other benefits of a modern, connected device, remaining as responsive and powerful as any other laptop or tablet, just minus the color. Logistically speaking, adapting existing content would not be that problematic (“time-shifting” apps and other extractors already do this). And it’s more than a glorified e-reader: the limitations of that type of hardware are lethal to many of the ways in which we are now accustomed to finding, consuming, and creating content (to say nothing of the screen quality).

Why black and white? Well, why color?

But what the hell is the point, you ask, if it’s not in color? The web is in color. The world is in color!

Your Instagram feed won’t be quite as striking in greyscale, it’s true. Rich media wasn’t designed for monochrome, and shouldn’t be forced into it. It demands color, and deserves it. Obviously you wouldn’t want to browse Reddit or edit video on a monochrome display. But if something does not require color, it seems pointless to provide it, especially when doing so has real drawbacks.

You’ve seen the apps that prevent procrastination, or make the user focus on a task, by blocking out distractions and the like. At some times we want a tool that does one thing, and at other times we want a tool that does many. That’s why computers are so great: They can switch between, say, text-focused work mode and image-focused movie mode in an instant.

They’re like Swiss Army knives: a corkscrew one minute and a can opener the next. But, as I tried to suggest in my previous column, if you tend to open a lot of wine bottles and very few cans, wouldn’t you prefer that you had a dedicated wine opener, without a bunch of other tools attached? That it can’t open a can is tragic, but more than made up for by its facility in its chosen task.

There will always be a place for the essential alone

I believe some people would not only be unperturbed by an inability to watch videos or what have you — in fact, they may prefer it. We already have different computing tools for different purposes, and we don’t demand that they all do everything — I have a laptop so I can write, as I am at the present, while enjoying some fresh air and coffee. I have a desktop for games and heavy productivity. I have an iPad for this, and an e-reader for that, and a phone for this, and a camera for that. What’s one more, especially when it would be, I believe, quite good at what it does, even if that’s “only” working with text?

There’s also a less practical, more aesthetic reason I would enjoy a black and white device. The content we consume and the ways we navigate it have become loud and colorful, and to me it does not appear that this profusion of saturation has been accompanied by a corresponding subtlety of design. The eruption of capabilities has made many lose touch with the beauty of austerity, and what’s billed as “minimalism” rarely is. There is a set of qualities that sets that starkness apart, and while we have always enjoyed ornamentation, there has always been (and will be for the foreseeable future) a place and purpose for the essential alone.

On that note, I think it would be an interesting experiment, and a highly beneficial one, to attempt to rebuild, say, Facebook or an OS without any color at all. When you subtract color cues like green for yes and red for no, or implicit boundaries based not on contrast and flow but on different coloration, the problem of presenting and consuming the information concerned is totally changed. Perhaps one would learn better the fundamentals of layout, flow, proportion, and so on, and that would inform the color world as well.

I read a lot, and I write for a living. I want a specialized tool for doing those things, just as a logger would want an axe instead of a big knife, or a runner a good pair of shoes instead of slippers. In the end, I like the idea of a black-and-white device and interface for many of the reasons I like black-and-white photography. It’s different, and has different strengths, and both requires and provides a different perspective. For me, that’s enough to at least want it on the table.



Devin Coldewey

March 31st

Gadgets

Circuit Breaker

[Image: old light switches]

There’s something I’ve been hoping to encounter over the years of writing about tech and gadgets that never seems to materialize: A hardware switch to disconnect my device from all outside communication. Call me paranoid, but airplane mode just isn’t good enough for me. Such a switch for wireless (or for the camera, or the microphone) seems to me an elementary protection against a number of potential dangers, and I doubt I’m the only one who would appreciate it.

It’s not that I think The Man is secretly tracking my phone at all times, even when I’m in airplane mode. If anything, He and the companies we all pay for data connections are doing so relatively openly! That’s expected now, and circumvented in other ways. It’s just a matter of trust in a number of parties’ honesty and competency.

You trust Apple or Google or whoever makes your phone or laptop to successfully shut off the wireless in your device when you ask it to. And that trust probably isn’t misplaced — failing to do so would have incurred the wrath of the FAA and any number of privacy and security organizations. In the same way, you also trust that when the LED isn’t lit, your camera isn’t active, and likewise the microphone.

But it isn’t much in the way of fantasy to imagine an emergency signal that wakes up a component, just as there are signals and techniques being patented to turn them off. If Apple is considering (and probably engineering) a means to shut down your camera so it can’t take pictures of copyrighted works, aren’t immediate extrapolations from that a legitimate concern?

So why not have a way to totally shut down components of a device? There’s no trust necessary if you yourself can see that the method of providing power to the wireless chip or camera module has been interrupted.

Not everyone cares, of course. But be honest: How many of you with discrete webcams have them pointed anywhere but at you right now? How many of you are always aware of the presence of the unblinking, cyclopean electronic eye above your laptop’s screen? Have you never considered how easy it might be to hijack the microphone or camera for a hacker or, for that matter, someone lawfully observing you using means graciously provided by the creator of the OS? Carrier IQ, anybody? FBI begging software companies for government backdoor privileges?

It’s not paranoia to have a chain lock as well as a deadbolt — redundancy is just a part of good security practices.

When you’re protecting your bank account, or your email, you don’t hesitate to ask for two-factor authentication. One would think that when setting up your daughter’s webcam or phone, you’d be able to take similarly thorough steps. Perhaps, even with the pervasiveness of smartphones and other connected devices in our homes and on our persons, not enough people are aware of the fact that the only lock on their digital devices is one frequently exposed, indeed advertised, to the online world. To have a switch under your thumb that renders your device inaccessible to the physical phenomenon used to operate it is the ultimate protection. That people aren’t clamoring for it is honestly surprising to me.

Unfortunately, I doubt it will happen for a number of reasons. It’s troublesome for the user to have to worry about it, for one thing, and most would ignore it. It also undermines trust in the OS and its security — would you buy a lock from a guy who said “maybe you should get this one too, just in case”? And technically speaking, shutting off and restarting a component constantly (especially in system-on-a-chip architecture) is not trivial. It’s doubtful manufacturers will decide to isolate certain portions just so you can power them on and off at will (again, not a simple process).

Still, I can dream. I’ve always felt the need to exert control over my devices, and I am frustrated at every point along the frontier where my privileges as a user end. I have faith, at least, in people of like mind but more capable, to either provide such security measures as will satisfy those even more suspicious than myself, or to convince me of their superfluity.

[Image: Paul Cross / Flickr]



Devin Coldewey

January 6th

Gadgets

Mobile


“Gun” “Control”

[Image: Liberator]

Please note that this is a post about technology, not politics.

The tech industry cheerleads the displacement and reconfiguration of huge institutions like the music industry and telecoms. The arms industry shares many of the attributes of those industries, and is poised for a fundamental change much like the ones they have experienced. If the product of the arms industry were not arms, the inevitable upheaval would be anticipated and prophesied with glee by the usual pundits (this website included).

It’s not, because the general availability of weapons is not something we as a community can agree on as an unmitigated good. For that matter, even free speech and assembly are by no means goals universally agreed upon. But advances in technology are providing all of these things, regardless of the preferences of any one group.

If we as a country, and indeed we as a global community, are going to seriously address the question of gun control, we need to address the issue of fabricated weapons and weapon plans, or else the discussion will be moot. This is because the proliferation of 3D printed weaponry changes both the definition of “gun” and of what it means to “control” it.

Gun

What is a gun? A barrel is not a gun, nor is a stock, or a sight, or a trigger. But at some point you put these and a few other objects together and you have a gun. As it turns out, the receiver is how such things end up being defined in this country, at least as a rule of thumb. Buying, selling, and creating the receiver, into which a cartridge passes from the magazine and is prepared for discharge, is buying, selling, and creating a gun.

You may have read that there is already a 3D model of an AR receiver that can be printed, combined with other parts, and turned into a working firearm. The most recent news on that front was such a gun failing after firing just six rounds, leading to no small amount of derision online regarding the possibility of printed guns.

This allows people to ignore the issue, since if they aren’t making real guns, it’s not a real problem. In fact, some reading this probably consider the issue a little silly.

This skepticism is misplaced for two reasons.

First, the problem is strictly technical, and the team that made the gun was already analyzing and correcting for the problem by the end of the day. If they had a high-quality printer, they could have the improved part overnight, which is a capability that is changing other industries as well.

Second, the problem is not a problem. They created a working firearm. In World War II, the U.S. manufactured one million FP-45 Liberator handguns. These crude, single-shot pistols were designed to be dropped from the air by the thousand over occupied territory, to give the resistance there the advantage of a firearm, be it only for one shot. The fundamental difference was not between six shots and a hundred shots, but between zero shots and any shots at all.

A 3D-printed gun, were it only to fire one shot before melting or failing, is still a gun. After that, the difference is only in what kind of gun it is.

Of course printed guns don’t and won’t constitute the major part of the ideas in such a major and divisive debate as gun control. But that does not obviate the fact that we can print guns. We can do so today, and the ability to do so is only improving. It is very important to note that one need not take a side in the debate to acknowledge this. And it is very important that we acknowledge this now, so that we are not forced to acknowledge it later, when it will be too late to take either side.

Control

If you were to attempt to write a law governing media copyright in 1998, would you attempt to do so without acknowledging the existence of the Internet and compression methods like MPEG-1? Any law crafted under such restrictions would be laughably incomplete.

Likewise, if you were to discuss a law that allows or restricts the creation and distribution of firearms, would you attempt to do so without acknowledging the existence of 3D-printed weapons and the ability to transfer blueprints for them online?

Here’s the problem, though. Like the digitization of music, the digitization of objects, guns or otherwise, is a one-way street. Every step forward is ineffaceable. Once you can make an MP3 and share it online, that’s it, there’s no going back — the industry is changed, just like that. Why should it be different when you reduce a spoon, a replacement part, a patented tool, or a gun to a compact file that can be reproduced using widely-available hardware? There’s no going back. So what is “control” now?

Will ISPs use deep packet inspection to watch for gun files being traded? Will torrent sites hosting firearm files be taken down, their server rooms raided? Will all the ineffectual tactics of digital suppression be tried again, and fail again?

Will 3D printers refuse to print parts, the way 2D ones are supposed to refuse to print bills? Will printers have to register their devices, even when those devices can print themselves? How is it proposed that control is to be established over something that can be transferred in an instant to another country, and made with devices that will soon be as common as microwaves?

Part of the discussion has to be that, government or otherwise, there can be no more control over printed guns than there can be over printed spoons. Regulation or banning of firearms, whether you think the idea is good or bad, will soon be impossible.


This isn’t “the singularity is near” wishful thinking. We can print working guns right now. They fire bullets. The guns themselves are being improved, and the tools to create them are multiplying. The scale is small, but every defining technology starts as a niche.

This also isn’t “the sky is falling” alarmism. We have had to come to grips with other transformative technologies, destructive and constructive. The telegraph, the atomic bomb, the computer. We can deal with this development, and that’s good because we are going to have to one way or the other.

Lastly, this isn’t about taking sides, politically or culturally. This is only about looking at all the facts, not just the convenient ones. And while with other industries it may be accepted, even justifiable, to simply let events play out and reap the benefits meanwhile (as with the implosion of the music publishing industry), the same cannot be said about an industry that concerns not our entertainment or our means of expression, but rather (regardless of what side you take) our safety and security.



Devin Coldewey

December 17th

Gadgets

Laocoön

[Image: glider]

Suppose you dropped your phone — a real fall, like from the second story — and it broke. You’re picking up the pieces, cursing and trying to think of the last time you backed up your contacts, when you notice something. Deep within the phone’s hardware, hidden from everyday use, you find a message — etched right onto the chassis.

What kind of message? Let’s say you found a Darwin fish, or the letters YHWH. Or perhaps something a little more difficult to decipher — a code or symbol of some kind, not an inventory number, but still something meant to be seen and read. What would you make of it?

This isn’t actually a hypothetical situation or something out of a Neal Stephenson book. Apple has actually done this — and the symbol they’ve chosen is as arcane and ominous as it is unmistakable.

That’s the symbol at the top of the post, in case you haven’t guessed. Do you recognize it? No cheating, now. You don’t need any tools to identify it. It’s not a code, not binary or anything. Do you give up?

It’s a glider.

Iteration

Conway’s “Game of Life” is one of the true legends of software, and a lasting monument to the power of procedural generation. The game is “played” on a grid of cells that turn on or off according to a few simple rules that imitate the real-life conditions of isolation, generation, and overpopulation. Each “step” applies these rules to every cell and the result is usually a very different-looking grid. Random messes of on and off cells will resolve themselves into shapes, expanding or contracting masses, islands, patterns, and so on.

It was played on some of the earliest computers, requiring as it did very little computing power and being both entertaining and mathematically edifying. It has produced some surprisingly complex behaviors, and is Turing complete. (If you’re curious, it is available for free in many places on the web, and indeed can be played on paper, or with rocks in the dirt.)

One of the things the game produced was a number of arrangements of cells that could be relied on to propagate in a certain way. Some would spin, some would explode, some would neatly disappear. And some would travel; these were known as spaceships. There was one very simple spaceship, made of just five cells, that would move forever in a diagonal fashion. This was called a glider.
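
For the curious, here is a minimal sketch of Conway’s rules and the five-cell glider in Python; it is a toy illustration of the game itself, not of Apple’s etching. After four generations the same five cells reappear, shifted one square along the diagonal.

    from collections import Counter

    def step(live):
        """One generation of Conway's Game of Life over a set of live (row, col) cells."""
        counts = Counter(
            (r + dr, c + dc)
            for r, c in live
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )
        # Birth on exactly 3 live neighbors; survival on 2 or 3.
        return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

    # The five-cell glider.
    glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

    state = glider
    for _ in range(4):
        state = step(state)

    # After four steps the glider has moved one cell down and one cell to the right.
    print(state == {(r + 1, c + 1) for r, c in glider})   # True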

The glider could arguably be called the mascot of the Game of Life — that is, if the game ever needed one. It represents both the emergent complexity of the game and its extremely simple core nature.

There are certain symbols that are unintelligible to many, but instantly recognizable by a certain set of people, especially in certain contexts. A triangle stacked on the apices of two other triangles will to billions look like — some triangles (or a young Sierpinski fractal). But to practically anyone born in the 70s or 80s, it’s the Triforce, and it represents Nintendo, old school gaming, childhood, and so on. Or the number 42: just what comes between 41 and 43 for some, but for others, it’s a very important answer.

Certainly there are fewer who would find themselves affected by the arrival, as it were, of a glider outside of the Game of Life. But people who learned to code in assembly on monochrome screens, who soldered new contacts when the old ones burnt out, who count their Amigas among their treasured possessions, in short people who loved computing before you had to love computing to get by — to these people, the glider is unmistakable. It is to them that the glider inside Apple’s computer is addressed, because it is only to them that it is even intelligible.

But there’s something else.

Still life

The glider, it was mentioned, is made from five cells. If you’ll scroll up, you’ll find that there is one more than that in Apple’s little frozen game. A stray cell, two rows over. Curious — what happens when you put this into the Game of Life?

The glider dies.

Yes, things can die too in the Game of Life. Groups of cells can reduce themselves to nothing, though this only happens under certain conditions and should perhaps be likened to extinction rather than death. More commonly, constellations of cells end up stagnating in shapes that could certainly be called stable, but are as distant from the constantly changing, fascinating dance of dots that defines the game as they can be without disappearing altogether. Technically the glider becomes a known static shape. But for this active little craft, with its wiggly diagonal propagation and useful character, it is as good as death.

Etched permanently into the body of one of the most advanced computers ever made, then, is a symbol of experimentation, tradition, and potential — being killed.

Can this really be the meaning of the symbol? Would Apple, or someone within Apple, really create such an Easter egg, loaded with strange and somewhat disturbing symbology? A tiny tableau frozen in a perpetual state of doom, like that of Laocoön and his sons? (His story is apt, even prophetic.)

There isn’t enough information to lay it on Apple’s doorstep just yet. For one thing, there’s just the one Retina display that was taken apart, and maybe it is a tracking code that just happens to be in the shape of a glider. Or maybe it wasn’t Apple that did it, but a sentimental coder at the aluminum mill. Or maybe the interpretation is wrong, and whoever drilled that doomed glider into the chassis was playing the Game at a higher level than this writer. Or maybe they had no idea.

Or maybe it’s deliberate, and someone is trying to say that this is the end of the line. But then who is that someone?

Am I digging too deeply into the meaning of this symbol? No. Symbols are like fractals, in that they bear almost infinite scrutiny. It is difficult to find the bottom of a symbol, because symbols are abysses of culture — attractors, not artifacts. The dying glider might be looked at from a hundred different angles and interpreted in a hundred different ways.

But whether it can be said to have significance in relation to Apple, the current trends in technology and culture, and the philosophy of consumption — to decide that is beyond the scope of this article, since the only evidence is the existence of the symbol in history and on this device.

What does it signify? Unless an answer falls out of the sky, this dying glider will remain an enigma: too purposeful to be meaningless, yet not explicit enough to be meaningful. But whether it turns out to be a manufacturing error or a declaration of war, it is not wrong to contemplate, though our contemplations be moot. As the nameless narrator of Poe’s “The Assignation” apostrophises his idealistic former comrade:

Who then shall call thy conduct into question? who blame thee for thy visionary hours, or denounce those occupations as a wasting away of life, which were but the overflowings of thine everlasting energies?

That it is food for thought is enough — that it has the potential to be subtly terrible is enough. For that matter, that it is an interesting history is enough. Read it how you will, and draw your own conclusions. But if you will, a moment of silence for the doomed glider.




Devin Coldewey

June 23rd

Apple


The Way Things Work


Magic, they call it. And indeed we may add an appendix to that old saw: any sufficiently advanced, or sufficiently obscure, technology is indistinguishable from magic.

You must know the story of the Mechanical Turk: how princes and tradesmen were amazed by this ingenious device’s ability to play chess intelligently, in an age of steam and brass hinges. Today the hidden human operator seems obvious, yet at the time thousands were fooled. Had they known a bit more about machines, they might have realized it was not just improbable, but impossible.

The Mechanical Turks of our day aren’t designed for entertainment, but to be bought and used, yet a similar contrivance goes into preventing the secrets of their operation from being questioned. In fact, we are already at a point where it is more or less impossible for one person to understand or question them. Apple may be ahead of the curve on this trend, but while it appears they’ve been leading the industry by the nose, they in turn are being led by the inexorable forward motion of technology. Open hardware advocates fight the good fight, and they fight it valiantly, but defeat is inevitable.

And what would victory be, exactly? A laptop you can repair in the comfort of your home? Sounds good, to be sure — but how deep does that capability really go? If your hard drive breaks or your RAM is corrupted, will you pull out a magnifying glass and correct the faulty sectors with your electron drill? Adjust the drive head in your billion-dollar repair toolshop out back? No, you’ll order a new drive, new RAM, a new screen.

RAM used to be made of pieces too, you know. In an excellent (so far) book about the origins of the computer, Turing’s Cathedral, the mechanical nature of early computing machines is presented for your humble contemplation. ENIAC, for instance, had 17,468 vacuum tubes, 1,500 relays, and some 5,000,000 hand-soldered joints. Operation was complicated, but mechanical: if you weren’t careful, you might get your finger caught in the RAM. If something broke, you needed a wrench. Now a stored bit takes up so little space that if it gets much smaller it will cease to be governed by Newtonian physics.

This is the real problem. Technology actually is approaching the magic point. You want to know how your laptop works. You can’t know. Even the people who made it don’t know. Apple has to call up LG or Sharp when it wants a high-density display. LG has to call Samsung when they want MLC flash storage. Samsung has to call NVIDIA when they want graphics cores. NVIDIA has to call ARM to make SoC architecture. Vertical integration is a thing of the past because no company can do it all. It took Intel five years and billions of dollars to develop just the processor your laptop runs today. The whole system is the culmination of a century of work by geniuses and specialists. Control over your hardware is the flimsiest of illusions. You only understand the snow frosting the top of the iceberg, and even then all you can do to fix it is pay for more.

But that’s a bit of an academic (and existential) appraisal of the subject. Realistically speaking, there are better and poorer ways of creating a laptop, ways that enable such a device to last for five years instead of two, or to enable upgrades that cost a few hundred rather than a thousand dollars. The new Macs are, by some standards, the worst yet made.

Even this is on its way out, though. Integration and portability are the watchwords now, not modularity, at least for the vast majority of users. Mobiles and tablets use SoC architecture that unifies logic, graphics, sound, and other functions on the same chip for reasons of compatibility and power savings. I’ve assembled my own PCs for years, and I expect I’ll probably assemble one or two more, but even now it’s anachronistic, at least at the consumer level. Modular and open hardware (such as it is) will continue to exist, but as before it will only funnel into more usable, closed systems.

We’ve made this surrender many times. We surrendered control of our government to representatives because it’s better to have a few (ostensibly) informed individuals whose (nominal) duty it is to govern on our behalf. We surrendered control over our cars decades ago with electronically controlled fuel injection and timings, with parts we couldn’t fix or even reach, because it improves mileage and reliability. We surrendered control over the way we interact when we decided we’d use Facebook and text messages, because it’s convenient and fun. Each time we make a little bargain: we control less and we get more. Is anyone surprised it’s happening again?

We should certainly be able to do what we want after the fact. We can impeach our representatives, tweak our timings, and use Facebook to organize anti-Facebook rallies. And we can and should run our own programs, our own operating systems, do what we will with the platform we’ve bought.

The biggest threat is not to hardware, which has in truth been beyond the comprehension of users for decades, but to what we are allowed to do with it. Apple can solder their RAM and seal it with custom screws all they want. They are only creating the medium and in this case, the medium is not the message. Their computers are more locked down than others, but we mustn’t underestimate how locked down the others already were.

More troubling is the deeper marriage we are seeing between hardware and software. How many OS X and iOS-specific functions do you think lie beneath the placid mask of the A5 processor? How long before locked bootloaders and UEFI and intelligent cables prevent you from installing a new OS or streaming from non-approved sources?

For that matter, with virtualization of services and externalization of storage, how many steps are we adding between ourselves and the things we use? Running the software we want, even if it ran on hardware we didn’t understand, was one of our last strongholds. And now “our” software is running on other people’s hardware, people who give it to us for free, and in return we… what, exactly? We don’t question that nearly enough.

The fight is not to control the hardware. The hardware has been out of our control for a long time. Despite that, hardware today, more complex and inaccessible than ever before, is more enabling and powerful than ever before. If you want a fight, don’t fight against technological progress, which constantly moves these things ever further out of your grasp. Whether you or Apple has to replace the drive or screen in your new MacBook Pro is immaterial. Whether Apple, or Amazon, or the MPAA, can stop you from using it the way you like is not. Forget the soldered RAM; there are those who would solder you down given a chance. They are the ones to fear, and therefore the ones to fight.




Photo

Devin Coldewey

June 16th

Uncategorized
