Permacomputing
Situating the idea of permaculture within the context of computing.
Bookmarks
- KDE Eco Handbook. A well-researched handbook explaining how software can be part of the solution, but is currently part of the problem.
- Permacomputing. The original introduction to the concept.
- XXIIVV - Permacomputing. Another exploration of the concept that looks especially at the permanence of software.
Log Entries
29.10.2024. Personal Tools
Paramytha, Cyprus
I recently finished Donald Norman's The Invisible Computer. It is a fantastic book, probably one of my favorites, and it starts off with a powerful quote:
The personal computer is perhaps the most frustrating technology ever. The computer should be thought of as infrastructure. It should be quiet, invisible, unobtrusive, but it is too visible, too demanding. It controls our destiny. Its complexities and frustrations are largely due to the attempt to cram far too many functions into a single box that sits on the desktop. The business model of the computer industry is structured in such a way that it must produce new products every six to twelve months, products that are faster, more powerful, and with more features than the current ones.
As much as I like it, however, one concept I disagree with is the conclusion that also gives the book its title: that technology should be completely invisible. By invisible, he means that technology should blend so seamlessly into our everyday life that we do not notice it is there.
[... talking about the goal of technology] The end result, hiding the computer, hiding the technology, so that it disappears from sight, disappears from consciousness, letting us concentrate upon our activities, upon learning, doing our jobs, and enjoying ourselves.
It sounds great in theory. In practice, of course, what we find is that companies make something easy to use but do not necessarily act in the users' best interest, and the user is stuck with what the company provided, neither empowered to seek out other options nor to fix it. I think this is an anti-pattern, and I talk about it in depth in my essay, The Curse of Convenience.
However, there is also another section that I found particularly interesting and shines a light on where I think technology should go, which is Norman's idea about what makes a good tool. Norman has this to say:
Good tools are always pleasurable ones, ones that the owners take pride in owning, in caring for, and in using. In the good old days of mechanical devices, a craftsperson's tools had these properties. They were crafted with care, owned and used with pride. Often, the tools were passed down from generation to generation. Each new tool benefited from a tradition of experience with the previous ones so, through the years, there was steady improvement.
I do not feel that modern phones and computers are tools of this kind. A good retro camera is something we learn inside out, with its quirks and unique abilities, and it becomes part of our hobby and personality. This is not so much the case with a modern iPhone. Modern technology removes as much personalization as possible and makes devices hard to repair, all for the sake of convenience, looks, and ease of use. That means you put no effort into learning or personalizing your tool, and as a result you do not appreciate it as much. Instead of customizing and learning about your unique device, you buy a new one that acts just like the old one. Devices become impersonal and invisible.
In a recent interview, Norman laments that his all-time best-seller, The Design of Everyday Things, did not cover that tools should be designed to be repairable too. To me, repairability is in opposition to his idea of technology becoming invisible. At the same time, I think repairability and customization go hand in hand with his idea that you should feel pride in owning a tool, and that this is what makes a good tool. A device that you tweak and make truly your own, you will care for more and want to repair as well. As I replace parts of my Thinkpad and change the way it looks and feels, I find it becomes more personal to me.
- Marc
07.09.2024. Great Software is Simple on Many Planes of Abstraction
Bogotá, Colombia
There is a dichotomy in software development: high-level and low-level software. This is particularly true of programming languages, where you have low-level languages that give you more control but require more understanding of how computers work, and high-level languages that let you express your ideas more simply without thinking about low-level details.
High-level software is called that because it operates at higher levels of abstraction. This means it obscures the lower-level machinery of what happens, so that we can focus on the task at hand. For example, we would not want to think about how a web browser forms a TCP/IP connection, communicates with a DNS server, and so on when opening a web page; at least not until the point where we hit some error.
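To make the layering concrete, here is a toy sketch in the shell (using example.com as a stand-in host): first fetching a page with a high-level tool, then doing the same with some of the hidden machinery exposed.

    # high level: one command hides name resolution, TCP, and HTTP framing
    curl -s http://example.com/

    # lower level: perform the hidden steps by hand
    dig +short example.com                 # resolve the hostname ourselves
    printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' \
        | nc example.com 80                # speak raw HTTP over a TCP socket

Both fetch the same page, but the second version shows exactly which parts there are to break.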
High-level software unlocks the ability to perform more advanced tasks more quickly and with less mental overhead. It can also greatly improve security, since it can restrict certain operations and better adapt to users' needs; even if that removes some liberty, it is still desirable.
When we focus only on building a good high-level experience, it comes at the expense of low-level control, which becomes inaccessible or too complex for regular users. This invites us to be inefficient and to construct false understandings; without a deep understanding of our systems, we can neither meaningfully fix them nor optimize them. It invites us to outsource this control to experts, who are often restricted to building general solutions for many use-cases. Poor understanding often leads to rebuilding the same solutions over and over again, with the solutions becoming even more complex and inefficient each time. When only experts understand systems, knowledge and power are centralized and the general population is left unaware of issues. I explored the democratic problems this leads to in my essay The Curse of Convenience.
Low-level tools give us greater control and greater freedom. Understanding the lower-level units makes it easier to see how everything fits together and gives us the power to make the changes we wish. However, using only low-level tools makes it more difficult to express ideas cleanly, and it can be a frustrating experience. It sometimes requires digging through manuals, the time it takes to accomplish tasks becomes much longer, and it is easier to make mistakes.
This dilemma leads me to solutions that attempt to be simple on many planes of abstraction. In programming languages such as Go and OCaml, the high-level semantics are simple, but a user can still understand the lower-level details of what happens, which makes it possible to operate at a lower level when necessary without being an expert. In software, Unix utilities strike the balance of being simple while allowing users to express high-level ideas. Older motorcycles are easy to operate, and also easy to fix when necessary.
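As a small illustration of that Unix balance, the classic word-frequency pipeline (shown here over a hypothetical essay.txt) expresses a fairly high-level idea out of small, transparent parts:

    # split into words, lowercase, count, and rank; every stage is a simple,
    # inspectable tool, and the pipeline itself is the high-level program
    tr -cs 'A-Za-z' '\n' < essay.txt | tr 'A-Z' 'a-z' \
        | sort | uniq -c | sort -rn | head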
- Marc
07.09.2024. Communal Computing
Bogotá, Colombia
One of the most beautiful ideas behind the original Unix, one that I think has unfortunately been lost and is underrated, was a form of collective computing. People would gather as a group and collectively build tools specific to what their community needs. The way Dennis Ritchie described it:
What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.
Using a collection of simple tools, users could combine them to build the tools specific to their needs on time-shared machines.
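The same composition works at the communal level. On a time-shared machine, questions about the community itself become one-liners built from standard tools, for example:

    # who else is sharing this machine right now, one name per line
    who | awk '{print $1}' | sort -u

    # and how many of us there are
    who | awk '{print $1}' | sort -u | wc -l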
Another obvious advantage of collectively owned computers is that you retain ownership instead of ceding it to the bigger companies, while at the same time unlocking better optimization, since these computers can live in geographically advantageous regions. For example, Solar Protocol directs users to whichever server has the most sunlight.
There are a lot of advantages to empowering users to fix issues themselves rather than having someone fix their problems for them. I wrote about this extensively in my essay The Curse of Convenience. With the new LLMs, I also see a resurgence of this idea in Maggie Appleton's essay about home-cooked software. Personally, I am skeptical that LLMs will enable this revolution, but I think her essay is still worth a read!
Today, there is a communal computing system that still exists: the SDF. It is quite old, having been around since 1987, and it is definitely marketed towards a technical audience. It hosts a set of collective computers that any member can use for any purpose (within reason). With it, people have set up a Lemmy instance and a Mastodon instance. You also get a free email account, and a shell you can SSH into to do any kind of programming you want.
Personally, I use SDF to host my notes with git. Doing that was as simple as ssh user@tty.sdf.org 'mkdir notes && cd notes && git init' (the quotes make the whole chain run on the remote machine). Once done, I am able to access these notes from my phone, my laptop, or wherever I happen to be. To clone it locally, I just run git clone user@tty.sdf.org:~/notes. I also hang out at their Lemmy instance.
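For completeness, a sketch of the round trip from the laptop clone (the note file is hypothetical). One caveat: pushing into a non-bare repository is refused by default, so you would either initialize the server side with git init --bare or set receive.denyCurrentBranch to updateInstead there.

    # on the laptop: clone, add a note, and push it back to SDF
    git clone user@tty.sdf.org:~/notes
    cd notes
    echo "water the plants" >> todo.txt
    git add todo.txt
    git commit -m "Add a todo"
    git push origin master        # assumes the default branch is named master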
- Marc
04.09.2024. Human Readable File Formats
Bogotá, Colombia
I have recently (ish) become interested in file formats. In particular, file formats that are human readable and human writable.
These formats have quite a few advantages:
- They are (obviously) modifiable by hand
- They are easy to backup
- They are easy to maintain
- They are (often) easy to build software for (see the sketch after these lists)
But they also come with challenges:
- Human-writable means that data integrity is not guaranteed
- They are inefficient in storage and parsing compared to binary formats
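To make the "easy to build software for" point concrete, here is a minimal sketch over a hypothetical tab-separated notes.tsv, with one note per line (date, tag, text); the standard Unix tools already form a toolkit for formats like this:

    # count how many notes carry each tag, most common first
    cut -f2 notes.tsv | sort | uniq -c | sort -rn

    # fix a typo in place, because the format is just text
    sed -i 's/fromats/formats/' notes.tsv    # GNU sed; BSD sed wants -i ''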
To me the biggest beauty of these file formats is that they can outlive the software that created them. Even if I am on a foreign computer, without internet, hit with amnesia, I can still make sense of and modify these formats.
Software, in one way or another, takes data in and puts data out; that is what a computer is meant to do. Inspired by Permacomputing, I think it is worth thinking about how we can make sure the data we generate outlives the software that made it.
I kicked off a thread on Mastodon to see what kinds of human-readable data formats people know of. I am excited to see what people share.
- Marc