A Response to “The Final Upgrade” (PopSci October 2013)

I just read the short column in this month’s Popular Science about “The Final Upgrade,” supposedly our way out of phone and PC upgrades by way of smarter software. What I thought was going to be an insightful look into new load-minimizing techniques or ultra-efficient consumer software instead quickly degenerated into the trite “the cloud will save us all” argument.

Iozzio argues that the end of hardware upgrades is upon us, because we are offloading more and more processing onto the cloud. She compares the future to the mainframes of the ’80s: modern hardware will be “thin client” terminals for us to log in to some corporate mainframe. We will stream video games (remember the hype for OnLive?), use cloud document editors, and so on. Going further, we will log in via some VNC-style remote desktop to a complete computer-that-doesn’t-belong-to-us. If that’s the future, count me out.

First, the technical problems are not fixed. Americans severely lack the bandwidth necessary to stream videos satisfactorily, let alone send video game inputs up and bring graphics back down with sufficiently low latency to make a game feel good. It just isn’t the same as having a local powerhouse to serve those graphics over a single meter of copper. But let’s assume, for the sake of argument, that we’re talking about some awesome fiber-to-the-home future, which seems imminent. We have tons of bandwidth, and we can stream and upload without hassle.

The main problem I have with Iozzio’s argument is that it is precisely Service as a Software Substitute (SaaSS). And, as the GNU project so thoroughly lays out, SaaSS erodes user freedom. The cloud-VNC future of non-hardware that Iozzio describes is one step beyond the current use of cloud storage and documents. It is a whole operating system on someone else’s computer! As legal teams around the world have already learned, users have no reasonable expectation of privacy when they use someone else’s computer. This astonishingly overlooked point means that, in the future the author envisions, users will never have an expectation of privacy. Ever. Period.

With our clearer knowledge of the government’s espionage activities, and even with old-fashioned common sense, this vision of the future is unreasonable. It is unacceptable and irresponsible to make a technological prediction without considering its legal and social implications.

So instead of this cloud-VNC nightmare, we should be hosting our own clouds, or (gasp) keeping our data locally on our devices, and using better software that works on older hardware. Users must own the means of computation, or we’ll be right back where we started, with corporate-mandated policies on what is acceptable for people to do on their computers. Further, we’ll be begging the NSA to take a close look at our whole computing stack, taking whatever it pleases. Let’s not go backward after so much progress toward user empowerment.

Smoothing PLA

Just found a great tip over on Hackaday about smoothing PLA.

With ABS plastics, you can suspend the part in acetone vapor for a while to get a shiny finish, but PLA isn’t soluble in acetone. It turns out you can use tetrahydrofuran (THF) in the same role for PLA. Awesome!

I just ordered some on eBay.

RepRap Accessories

I ordered some handy tools and parts in anticipation of my RepRap kit. I got a stainless 18″ metric/inch ruler, some high-temperature “Ultra Copper” silicone gasket maker, some 4″ Kapton tape, and some wrenches.

The 13 mm end-wrenches will be for tightening the structural nuts on the threadstock that makes up the Prusa Mendel frame. The 5 mm wrench, a less common size, is for adjusting the extruder (as far as I know).

I also got some awesome plastic braided wire wrap, essentially a long split tube for bundling wires. I will use it to tie cables to the frame, and possibly as a filament guide.

The gasket silicone is for potting the heater resistor in the hot end bolt, and for attaching the thermistor. It forms a mechanically strong and thermally-conductive joint between the hot parts.

The kit comes in tomorrow, along with the caliper.

Knobs: An old-school interface

Knobs


I love thinking about and designing computer interfaces, in code and in the physical world. And I really love knobs, specifically infinite-scrolling with rotary encoders.

Pursuant to my love of interfaces and knobs, I put together a three-knob interface for my laptop, and I call it “Knobs.”

Why knobs?

I was thinking about the touch-based inputs we use so often now. I really like touch screens, but they miss out on some of the haptic satisfaction of bulkier, older-style interfaces. I love old Hi-Fi audio equipment, because of its sense of purpose and ease of operation. I wanted to experiment with a heavier, more tangible interface for my computer.

Knobs are cool because they can represent any linearly variable quantity, like volume, brightness, intensity, or tone. They can also represent position, speed, or menu choices. I built this project to be as open-ended as possible, because I didn’t really know what I would use it for.

Functionality

I currently have the knobs controlling my window manager. One emulates alt + tab, for switching between open windows, and one emulates ctrl + alt + arrow, for switching between workspaces. I haven’t decided what the third should do, and I haven’t yet used the shaft button for anything.

Technical implementation

Knobs is really simple: an Arduino Pro Micro knockoff is wired to three rotary encoders. It doesn’t have enough interrupts to watch all three, so instead it polls each encoder as quickly as possible.
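The polling approach can be modeled as a small quadrature state machine: each encoder has two pins, and the previous and current 2-bit pin states together index into a lookup table of direction steps. This is a hedged Python sketch of that decoding logic, not the actual firmware (which is an Arduino sketch on my GitHub); the `Encoder` class and pin values here are illustrative stand-ins.

```python
# Quadrature decoding by polling: each encoder has two pins (A, B).
# index = (previous 2-bit state << 2) | current 2-bit state selects a
# direction step: +1, -1, or 0 (no change / invalid transition).
TRANSITIONS = [0, -1, +1, 0,
               +1, 0, 0, -1,
               -1, 0, 0, +1,
               0, +1, -1, 0]

class Encoder:
    def __init__(self):
        self.prev = 0
        self.position = 0

    def poll(self, a, b):
        """Call as fast as possible with the current pin levels."""
        curr = (a << 1) | b
        self.position += TRANSITIONS[(self.prev << 2) | curr]
        self.prev = curr

enc = Encoder()
# Simulate one detent: the pins step through the Gray-code cycle
# 00 -> 10 -> 11 -> 01 -> 00, i.e. four valid transitions.
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    enc.poll(a, b)
print(enc.position)  # 4 counts in one direction (sign is convention)
```

Because the table returns 0 for invalid transitions, a bounce that repeats a state simply adds nothing, which is why tight polling works without interrupts.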

The Arduino has a built-in USB stack, so it emulates a keyboard. It sends keypresses when the encoders move, and those are mapped to window-manager controls on my laptop.

The source code, for Arduino 1.0 or higher, is on my GitHub.

Physical Design

Most of the project was focused on the form factor of a human-computer knob interface. I decided three knobs would be useful without being too bulky. I wanted a feeling of high quality and solidity, with intuitive purpose.

Knobs without the, erm, knobs.


The knobs invite the user to interact with them. They aren’t labelled, but their effects are immediate and reversible, so experimenting with them is natural. The enclosure is a solid block of black walnut, hollowed with a drill press and oiled with Tung oil. The two end faces are quarter-inch Aluminium plate, sanded, brushed, etched and then anodized. Four black hex-head bolts hold each plate on at its corners, and are easy to remove thanks to the embedded metal threads in the wood.

A single USB port is the only output on the rear plate, and the device has no lights, other sensors, or batteries. It is as simple as possible, while being as useful as three knobs can be.

Influences

Knobs was heavily inspired by Hi-Fi equipment. I also channelled the aesthetic of Monome, a new interface group legendary in the music community. I wanted the purposeful feeling of an oscilloscope, where each important function is broken-out to a top-level control, in the form of a physical sensor like a switch or knob.

I like the idea of important functions being on the top level of interaction: the user doesn’t have to dig for them. The knobs don’t have any modifier keys to change their functions, like a shift or alt key does for a keyboard. They don’t do different things based on the context of work on the computer; instead, they have a “global” scope.

The knobs take up space. They are always present on the desk, which could be good or bad, depending on how you look at it. On the one hand, they are never hidden, so they are always easy to find and use right away. When I want to switch workspaces, I turn the workspace knob. On the other hand, they take up valuable space to do something I can do with my keyboard. They are heavy and don’t travel well. Maybe that, too, is part of why I like them.

Aluminium and wood are two of my favorite materials in the world, and I love combining them. A computer interface device is an unexpected candidate for these architectural materials, and I think they bring a sense of humanism and physicality to virtual spaces. They wear well and last a lifetime, and they feel good to hold and manipulate. The knobs feel cool to the touch after they sit unused for a few minutes, then they warm up as they are used more often. The walnut is smooth and warm to the touch, and its natural grain pattern complements the precision of the Aluminium.

Utility

The Knobs aren’t all that useful to me right now, but I like them and I am glad I made them. I learned how to interface with rotary encoders–it’s actually quite simple.

Reflecting more on the action of switching between running tasks with the keyboard (for example, using alt + tab), I realized that the main point of the shortcut was to keep both hands on the keyboard. Reaching for a knob is just as time-consuming as reaching for a mouse or touchpad, but the cognitive load of turning a purpose-built knob is lower than that of locating, moving to, and clicking on a task-tray representation of an application.

The knobs don’t really save any motion, so maybe foot pedals would be better for the ultimate in efficiency. Knobs are very intuitive, though, and they can easily map to media controls, or perhaps the zoom level of a browser window. In theory, they could map to any function of the computer, including those affecting the web. I just need to think some more about what those functions might be.

The build, roughly in order

A new Prusa Mendel is on the way

I just ordered a Prusa Mendel kit from NWRepRap. They were extremely friendly and very competent, and I am excited for my new RepRap to arrive. The kit price was great, and I love the completeness of the BOM. While I was at it, I purchased a 5-lb spool of black PLA. NWRepRap has it for a good price, and their shipping wasn’t bad (it depends on your location and the weight of your order).

I also ordered some tools to go with it: a dial caliper (because those cheap digital ones eat batteries too quickly), some end-wrenches, and a long ruler. Calibrating the printer is crucial, and I want to get it right. I also ordered some wide Kapton tape for the print bed and some silicone and wire wrap.

I am excited to assemble my open-source printer, and I will be posting the process.

Tracking Microfluidic Droplets in ImageJ

I am researching microfluidic droplets this summer. We run aqueous fluid through a flow-focusing region in a micro-channel, which causes little drops to shear off and flow through the chip. We observe the drops with an inverted microscope and the high-speed Andor Zyla camera.

I am developing a plugin for ImageJ to track the droplets. We want to see their shape, velocity, orientation, area, and other geometric properties, but it takes a long time to track them by hand (and it’s silly, when we have computers).

The problem is that these droplets are hard to track. They have a similar brightness to the background, their borders aren’t always well-defined, and they move quickly relative to the speed of the camera. So I am brainstorming new ways to isolate the droplets from the background.

Right now, the tracker relies on various steps of removing noise, subtracting static background pixels, and binarizing the image to convert the droplets to black binary blobs on a white binary background. The tracker itself just observes these binary blobs, so the important work happens before the tracker is even involved. The way in which we convert these noisy, grainy images of real-world droplets into representative binary form directly affects the quality of the tracking results.
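The pre-tracking pipeline described above can be sketched in a few lines of NumPy. This is not the actual ImageJ plugin, just a hedged illustration of the idea: estimate the static background, subtract it, and threshold the difference into binary blobs. The synthetic frames and the threshold value are made up for the example.

```python
import numpy as np

def binarize_frame(frame, background, threshold=30):
    """Subtract the static background estimate, then threshold the
    remainder into a binary mask (True = candidate droplet pixel)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# The static background can be estimated as the per-pixel median over
# many frames, since a moving droplet occupies each pixel only briefly.
frames = np.random.randint(100, 110, size=(20, 64, 64), dtype=np.uint8)
background = np.median(frames, axis=0).astype(np.uint8)

frame = frames[0].copy()
frame[30:34, 30:38] = 200              # synthetic bright "droplet"
mask = binarize_frame(frame, background)
print(mask.sum())                      # area of the blob, in pixels
```

The tracker then only ever sees `mask`, which is why the quality of these steps dominates the quality of the tracking.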

I have some thoughts on how to do better. Right now, I am treating each still frame in the stack as independent from the others. The filters are stateless, in that they work the same on each frame regardless of what frames came before or after in the stack (in time).

But the droplets are moving, so I should be able to use that crucial bit of detail to massively improve the tracker. When I, a human, see the following image (a photo of a droplet in a channel), I can guess where the droplet is. I expect something oval-shaped, somewhere in the center of attention. But if I look at the image without any bias, I really don’t know whether that droplet is part of the background or not.

Background or foreground?

I am biased, in that I look for simple geometry. It’s what my vision is trained to do well. However, as soon as I see the next frame in the stack, it’s obvious what I can ignore as the background.

Two shots

The droplet in the center moved down a bit, but everything else stayed still. I can easily isolate it from the background with my eye, especially when it’s moving. So maybe I can get the tracker to do the same.

Using a subtraction, I can find the literal difference between the later and earlier frames shown above. This yields a “heat map” of everything that moves in the images.
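The subtraction step is simple enough to sketch directly. This hedged NumPy example (synthetic 8×8 frames, not real microscope data) shows how a signed difference between consecutive frames cancels the static background and leaves lobes where an edge moved.

```python
import numpy as np

def motion_map(earlier, later):
    """Signed difference between consecutive frames: moving edges show
    up as positive/negative lobes; static background cancels to zero."""
    return later.astype(np.int16) - earlier.astype(np.int16)

earlier = np.full((8, 8), 100, dtype=np.uint8)
later = earlier.copy()
earlier[2, 2] = 160   # a bright droplet edge in the first frame...
later[3, 2] = 160     # ...moves down one pixel in the second frame

heat = motion_map(earlier, later)
print(heat[3, 2], heat[2, 2])  # +60 where the edge arrived, -60 where it left
```

Casting to a signed type before subtracting matters: subtracting `uint8` arrays directly would wrap around instead of going negative.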

Subtraction

The blue area represents the moving front edge of the drop. Unfortunately, it isn’t round, and it doesn’t look like a drop, because the subtraction removed everything except the change.

When a droplet moves between frames, most of it (by area) will seem to stay still. Only the front and rear edges will change. So this subtraction can’t be the full solution, but maybe I can use it as a place to look for drops.

Interestingly, I can do the same on the binary version of the image.

Binary XOR

The two left images above are the messy, thresholded representations of two of the original images (not exactly the ones above, but similar). I can make out the droplet in the center, but so much mess surrounds it that the tracker gets confused.

The image on the right is the result of XORing the two left images. Where one has a dark pixel, and the other doesn’t, the XOR is illuminated. This yields a robust hot spot where the drop is moving, and it also creates some noise from the blinking background pixels (an artifact of the thresholding).
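The XOR operation behaves the same way on arrays as it does on the images. Here is a hedged toy example with synthetic boolean frames (not the real thresholded data): the overlapping body of the blob cancels, and only the leading and trailing edge rows survive.

```python
import numpy as np

# XOR of two binary (thresholded) frames: pixels that are set in one
# frame but not the other light up, marking where the droplet moved.
frame_a = np.zeros((8, 8), dtype=bool)
frame_b = np.zeros((8, 8), dtype=bool)
frame_a[2:5, 2:6] = True   # droplet blob in the earlier frame
frame_b[3:6, 2:6] = True   # same blob, shifted down one pixel

moved = frame_a ^ frame_b  # hot spots at the leading and trailing edges
print(moved.sum())         # only the two 4-pixel edge rows survive: 8
```

A blinking background pixel (set in one frame, clear in the next) survives the XOR too, which is exactly the thresholding noise mentioned above.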

It isn’t directly trackable yet, but I hope I can work it into my system soon.

The hive mind of instant answers

New instant-answer services like Siri and Google Now are awesome. They let curious but otherwise busy people look up that random fact. They let people on the road communicate safely. And perhaps most significantly, they present complicated information as a single answer, usually displayed more prominently than the related search results, to make digesting the information easier.

However, I think that single-answer format also poses something of an ethical dilemma, or at least an information-security concern.

Think about it: as more and more people start using services like Siri and Google Now and the inevitable clones that will follow, more and more people will rely on these single-phrase answers for their research and day-to-day information. That in itself is not a problem, but with the scale comes the emergent effect of a homogeneity of knowledge. If we trust a given service to decide what the most valuable answer to our questions is, or to decide which answer we intended to receive, we implicitly agree that the service’s priorities are also our own priorities.

Of course, the counter-argument would be that the kinds of questions people ask of services like Siri and Google Now are trivial and usually have factual answers, such as what time a bus leaves, or what the weather is doing. But to that I would say that it seems inevitable with new tech like knowledge graphs and bigger Big Data that the language recognition and the corresponding answers will only get more nuanced.

When you ask Google a question, you are doing something distinctly different from searching for related articles or videos or the like. You are combining “I’m feeling lucky” with a summary algorithm, a cross-referencer, and maybe even a translator. Of course I don’t know how exactly the algorithms work, but for once that isn’t my main concern.
The algorithmic answers we trust from these machines are only as good as the data they mine. When you ask a question, who exactly is on the other end? Added to the already-difficult problem of authorship verification online is now the total lack of a source citation on the one-liner we receive for our query. How do you trust an author you can’t see? You trust the algorithm that chose to draw from that (or those) author(s).

This averaging effect on the underlying data corpus from which the algorithm draws its answers leads to the possibility of manipulating the answers people rely on. If people aren’t doing the careful research themselves, it may be less obvious if malicious data is injected in order to purposely skew certain answers.

And if, in the usual case, the answers we get from the instant services are accurate–for example, for weather information or stock quotes–what reason would we have to distrust the more complicated or sensitive information? It is unlikely that the average user will investigate and verify the answers the service provides.

One-liner questions and answers inadvertently open the door to a sort of hive-mind effect, where the information people use on a daily basis is cross-linked to the prevailing buzz on the net and extremely condensed and normalized. However, I think the less-often-discussed side of this new system is the potential for malicious actors to subtly affect the information people use on a daily basis, or for monied interests to skew common knowledge in a certain direction. For example, if I ask “what is the best refrigerator?” and Google Now replies “a Frigidaire,” then rather than getting a results page of various tech reviews, I get a single answer. How will I know its origin? If many people rely on this form of research, common perception will come to depend on the service provider’s choice of ad influence, statistical weights, and so on.

These services are certainly helpful for quick information, but please don’t sacrifice your willingness to dig a little deeper for things that matter.

Reverse-Engineering the Phonak Europlug

My brother uses Phonak hearing aids, and recently the patch cable he uses to listen to music deteriorated so far that only one channel was working. Unfortunately, the cables are $50 new and hard to buy. I decided to try my hand at making a new cable, using top-quality components.

The Old Cable

In my haste, I didn’t photograph the old cable. It was a blue PVC patch cable with a Y-splitter about halfway up, like many earbud cables. The splitter was suspiciously large, a fact that will be important later in this tale. I started by measuring for continuity from the TRS (tip-ring-sleeve) connector to the “Europlug” connectors at the hearing aids. It seemed that the cable was completely broken, since no current would flow except in the ground return!

I thought the discontinuity might be somewhere in the cheap cabling along the way, so I cut the Europlugs off (leaving whips for soldering later) and found that the connectors were intact.

The Europlug schematic

I next checked the TRS end of the cable, and it too seemed okay. The only reason that the cable would not conduct, then, is if it had some sort of AC coupling capacitors for attenuation… I carefully cut away the cast silicone on the larger-than-normal Y splitter and indeed found an attenuator circuit inside. It was formed on a tiny two-sided PCB, but a little trace-following revealed the schematic:

The Phonak attenuator circuit.


The attenuator turns out to be very important. Without it, headphone audio signals commonly reach up to 2Vpp (peak-to-peak). On the other hand, the low-voltage battery in the hearing aid sets its maximum allowable voltage to around 1.3V. Considering that the ’aid can function even when the battery is low, and can receive signals from tiny accessory “boots” that act as parasites on the main battery, it makes sense that the signal level can be very tiny. The attenuator in the cable brings the headphone signal down to the order of millivolts, protecting the hearing aid from overvoltage. Without it, even the lowest volume settings would sound horribly loud!
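The scale of that attenuation is easy to check with divider arithmetic. The resistor values below are hypothetical, chosen only to illustrate the ratio involved; the actual Phonak values are in the gEDA schematic linked later in the post.

```python
# Why the attenuator matters: a resistive divider scales a typical
# 2 Vpp headphone signal down to the millivolt range the hearing aid
# expects. Resistor values here are HYPOTHETICAL, for illustration.

def divider_out(v_in, r_series, r_shunt):
    """Output of an unloaded resistive voltage divider."""
    return v_in * r_shunt / (r_series + r_shunt)

v_headphone = 2.0                       # typical peak-to-peak swing
v_out = divider_out(v_headphone, r_series=47_000, r_shunt=100)
print(round(v_out * 1000, 1), "mVpp")   # ~4.2 mVpp: millivolt territory
```

A 470:1 ratio like this also explains the symptom: with the divider network broken, a simple continuity test across the cable shows almost no conduction even though the connectors are fine.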

I used the schematic I harvested and the old connectors to build a new, better cable with fancy braided insulation for durability.

The new cable, with old connectors salvaged


Files

I made a gEDA schematic of the attenuator and connectors, in case anyone needs it. Hack on!

Rethinking the Alarm Clock

I interact with my alarm clock every day. It wakes me up in the morning, and because I get up at a different time each day depending on what I need to do, I end up setting the alarm time again almost every night. I have been thinking about device design and the user experience of an alarm clock, and I have some ideas that may make it into a new prototype I’m building. The device I have in mind isn’t really cost-effective–it won’t compete with the bargain-basement plastic clocks pouring out of China–but I would like to explore some new design ideas.

Problems with Other Clocks

The clock I use every day is bare-bones. I like it, because it has multi-colored seven-segment displays. But the interface for setting and checking the time and alarm is the seemingly standard four buttons: time and alarm buttons, plus hour and minute buttons.

The interface is admirably simple, driven to extremes by the competition for lower prices. But it requires two hands to use, and setting a time earlier than the one programmed into the clock is annoying. There is no way to go back; the user must go “around the horn” to choose a new time. Simply holding the buttons down doesn’t work, either: the clock counts forward so slowly that my finger cramps from holding the button. So I end up pressing the buttons as quickly as I can, and if I miss the minute I need, it’s another fifty-nine clicks to get back around. That’s annoying!

I also generally find myself frustrated with the lack of clarity as to whether the alarm is even active. The designers usually hide the alarm switch on the side or back of the clock, so if you don’t want to snooze, you’ll have to flip the flimsy switch until the evening.

My Goals

I would like to create the simplest and fastest interface I can. I have many thoughts about how to do so, but some ideas are central. Here’s the breakdown:

  1. Choosing a time is a continuous action. Time (at least as clocks portray it) is linear, and the user needs to be able to find a time in the continuum quickly and easily. For this task, I have in mind a rotary encoder, like a modern volume knob. The detents should be gentle and smooth, and fairly close together, so going through sixty clicks doesn’t take too much effort.
  2. Clocks are important at night, so the screen needs to light up. I don’t want to press anything to see what time it is, and I surely don’t want to turn on a lamp to see the time. I want to know, still in the dark, whether the alarm is on, what time it is now, and whether the clock thinks it is currently morning or night.
  3. The clock needs to survive a small power outage. If I have just fallen asleep, a power outage is not going to wake me up. And if the clock doesn’t wake me, nothing will. So I need the clock to remember the time for about twelve hours when it’s off wall power, and still sound the alarm when I have to get up. It doesn’t have to show the time if that takes too much energy, but it does have to wake me up.
  4. The alarm switch and buttons need to feel good. I want a solid click, not a wimpy squish.
  5. The clock needs to look nice. Ugly appliances are garbage.
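Idea 1 above is easy to model: one detent moves the alarm one minute, in either direction, wrapping cleanly at midnight. This is a hedged design sketch in Python, not firmware for any particular clock; the function names are mine.

```python
# A knob that moves the alarm time forward or backward, one minute per
# detent, wrapping at midnight so there is never an "around the horn."

MINUTES_PER_DAY = 24 * 60

def adjust_time(minutes_since_midnight, detents):
    """Turn the knob `detents` clicks (+ clockwise, - counterclockwise)."""
    return (minutes_since_midnight + detents) % MINUTES_PER_DAY

def fmt(m):
    """Render minutes-since-midnight as HH:MM."""
    return f"{m // 60:02d}:{m % 60:02d}"

alarm = 7 * 60                    # alarm set for 07:00
alarm = adjust_time(alarm, -15)   # back the knob up fifteen detents
print(fmt(alarm))                 # prints "06:45": no wrap-around needed
```

The modulo also handles the edge the four-button interface can’t: turning back one click from 00:00 lands on 23:59 instead of requiring a full trip forward.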