IoT Working Bench – Where the ideas materialize.

What is so amazing about IoT?
You can get started easily on a small budget, working with microcontrollers, single-board computers and all kinds of electronics, like sensors and more. For the standard kits we discuss here, lots of online documentation, books and websites are available; even interested people with very little IT or electronics knowledge, or students at secondary schools, can get hands-on with easy projects.

With a simple workbench, you can prototype and evaluate before you even consider going into series production, or maybe just build a dedicated one-off device.

Microcontrollers and SBCs

ESP32

The ESP32 SoC (System on Chip) microcontroller by Espressif is the tool of choice when aiming for a small footprint in terms of size (the chip itself measures 7x7mm), power consumption and price. It supports a range of peripherals: I2C, UART, SPI, I2S, PWM, CAN 2.0, ADC, DAC. Wi-Fi 802.11, Bluetooth 4.2 and BLE are already onboard.

The benefits come with limitations, though: the chip operates at 240 MHz and the memory counts in KiB (320 KiB RAM and 448 KiB ROM). Memory consumption has to be designed carefully, and a conservative approach to running the device in its various live and sleep modes pays off: it can consume as little as 2.5 µA (hibernation) but can also draw 800 mA when everything is running at full swing with Wi-Fi and Bluetooth enabled. The ESP32 and its variants teach you proper IoT design. You can buy the ESP32 as a NodeMCU development board for less than Euro 10,-.
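To illustrate that conservative approach, here is a minimal sketch, assuming the board has been flashed with MicroPython firmware and an analog sensor is wired to GPIO34 (both assumptions for this example): the device wakes, takes a reading, and goes back to hibernation.

import machine

# Did we just wake up from deep sleep?
if machine.reset_cause() == machine.DEEPSLEEP_RESET:
    print("Woke from deep sleep")

# Take one raw reading from the (assumed) analog sensor on GPIO34
adc = machine.ADC(machine.Pin(34))
print("Raw reading:", adc.read())

# Hibernate for 60 seconds; consumption drops to the microampere range
machine.deepsleep(60 * 1000)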

Arduino

The Arduino's history goes back to 2005, when it was initially released by the Interaction Design Institute Ivrea (Italy) as an electronics platform for students. Out in the wild as open-source hardware for over 15 years, it has a huge user community, plenty of documentation and projects ready to replicate.

The Arduino is somewhat similar to the ESP32 (though not as powerful: slower and with less memory), but more beginner-friendly. The coding is done with sketches (C/C++) uploaded to the device via USB, a logic similar to Processing.

If your project has anything to do with image, video or sound capturing, the Arduino (and the ESP32) is not the right choice; choose the Raspberry Pi as the minimum platform.

The Arduino has a price tag between Euro 10,- and 50,-, depending on the manufacturer and specs. For educational purposes you will find it packaged together with sensors and shields for basic projects.

Raspberry Pi

The Raspberry Pi (introduced in 2012) is the tool of choice if you need a more powerful device that runs an OS, can be connected to a screen, supports USB devices, and provides more memory, more CPU power and easy-to-code features. Connected to a screen (2x HDMI), it can serve as a simple desktop replacement to surf the web, watch movies and do office jobs with LibreOffice for a regular user profile.
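To give a feel for the easy-to-code side, a minimal sketch blinking an LED, using the gpiozero library that ships with Raspberry Pi OS (the LED being wired to GPIO17 is an assumption of this example):

from gpiozero import LED
from time import sleep

led = LED(17)      # LED assumed to be wired to GPIO17
while True:        # blink in a one-second rhythm
    led.on()
    sleep(1)
    led.off()
    sleep(1)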

The current Raspberry Pi 4 ranges between Euro 50,- and 100,- (inclusive of casing and power supply).

Edge or ML Devices

These devices are similar to the Raspberry Pi platform in terms of OS, connectivity, GPIOs etc., but lean more towards serious data processing and ML inference at the edge.

NVIDIA Jetson

NVIDIA launched its embedded computing board in 2014 and has released several new versions since then. The current one is the Jetson Nano 2GB Kit, which you can purchase for less than Euro 70,-. Together with all the free documentation, courses and tutorials, this is a small powerhouse that can run parallel neural networks. With the JetPack SDK it supports CUDA, cuDNN, TensorRT, DeepStream, OpenCV and more. How much cheaper can you make AI accessible on a local device? More info at NVIDIA.

Coral Dev Board

A single-board computer built to perform high-speed ML inferencing. The local AI prototyping toolkit was launched in 2019 by Google and costs less than Euro 150,-. More info at coral.ai.

Sensors

There is a myriad of sensors, add-ons, shields and breakouts for near-endless prototyping ideas. Here are a few common sensors to give a budget indication.

Note (1): There is quite a price span between buying these sensors/shields locally (Germany) and from the source (China); it can be significantly cheaper to order from Chinese reseller shops (though it might take weeks to receive the goods, and worse, you might spend time collecting them from the customs office).

Note (2): Look at the specs of the sensors/shields you purchase and check the power consumption (including low-power or sleep modes) and the accuracy.

  • GY-68 BMP180 – air pressure and temperature
  • SHT30 – temperature and relative humidity
  • SDS011 – dust sensor (PM2.5, PM10)
  • SCD30 – CO2
  • GPS – geo-positioning using GPS, GLONASS, Galileo
  • GY-271 – compass
  • MPU-6050 – gyroscope, acceleration
  • HC-SR04 – ultrasonic distance sensor (see the sketch below)
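As a taste of how little code such a sensor needs, here is a hedged sketch reading the HC-SR04 from a Raspberry Pi with gpiozero; the wiring (trigger on GPIO23, echo on GPIO24) is an assumption of this example, and note that the sensor's 5V echo pin needs a voltage divider down to 3.3V.

from gpiozero import DistanceSensor
from time import sleep

# Wiring assumed: trigger -> GPIO23, echo -> GPIO24 (via voltage divider)
sensor = DistanceSensor(echo=24, trigger=23)
while True:
    # .distance is reported in metres; convert to centimetres
    print(f"Distance: {sensor.distance * 100:.1f} cm")
    sleep(1)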
The author's IoT Working Bench

Some devices in the above image: Raspberry Pi 4B, Arduino (Mega, Nano), Orange Pi, Google Coral Dev Board, NVIDIA Jetson Nano, ESP32, plus a few sensors/add-ons like Lidar, LoRaWAN, GPS, SCD30 (CO2), BMP180 (temperature, pressure), PMSA0031 (dust particles PM2.5, PM10) and a micro stepper motor shield.

What else do we need?
Innovative ideas, curiosity to play and experiment, and the willingness to fail and succeed with all kinds of projects.
A 3D printer comes in handy to print casings or other mechanical parts.

Next Steps
The step from prototyping in the lab to mass production of an actual device is huge, though possible with the respective funding at hand. It makes a big difference whether you hand-produce one or a few devices that you have full control over, or manufacture, ship and support tens of thousands of devices as a product. You have to cover all kinds of certifications (e.g. CE for Europe) and consider having the device designed and produced by a third party (EMS).

Another aspect is the distribution of IoT devices at scale. A device operating in a closed environment, e.g. a consumer appliance that communicates solely locally, does not require a server backend. But for devices deployed at large, e.g. a fleet management system or a mix of device types, it is recommended to use one of the IoT platforms in the cloud or on premises (AWS, Microsoft, Particle, IBM, Oracle, OpenRemote, and others).

Stay tuned..

Taming the beast – Some GPU benchmarking

Resuming with the setup and benchmarking of the RTX 3080TI. After the initial basic 3D rendering FPS tests, it is time to get our hands dirty with some ML tests. Before trying to benchmark the GPU, we need to get the required Tensorflow packages and NVIDIA toolkits up and running under Windows.

For this setup we assume Windows 10, and we will use PyCharm as our Python IDE.

The required NVIDIA basic ingredients:

  1. Download and install the latest driver for the GPU from the NVIDIA download page. The CUDA toolkit requires a minimum driver version (more info).
  2. Download and install the CUDA toolkit (link) (at the time of this post, version 11.6).
  3. Download and install the cuDNN library (link). Beware, there is a dependency between the versions of cuDNN and CUDA. I was not able to make the latest versions of both (cuDNN 8.3.1 and CUDA 11.6) work for our Tensorflow setup. Download the latest 8.1.x version of cuDNN instead.

Following the official installation guide (adding the insights from some blogs and forums), we still have to make some manual changes to our system.

  • Copy the relevant library files from the cuDNN zipfile to the respective CUDA path folders.
  • Ensure the relevant paths are set up in the Windows system settings for environment variables; the snippet below helps to verify the result.
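Before involving TensorFlow at all, a quick sanity check can confirm that Windows actually finds the libraries via the PATH. This is a sketch under the assumption of a CUDA 11.x / cuDNN 8.x installation, whose DLLs carry these version-dependent names:

import ctypes

# cudart64_110.dll ships with CUDA 11.x, cudnn64_8.dll with cuDNN 8.x (assumed versions)
for dll in ("cudart64_110.dll", "cudnn64_8.dll"):
    try:
        ctypes.WinDLL(dll)
        print(dll, "found")
    except OSError:
        print(dll, "NOT found - check the PATH environment variable")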

With this we can start PyCharm, create a project and embed the Tensorflow packages. If you choose a different method, make sure you use virtual environments: the packages add up to around 2 GB and come with potential dependency problems that can interfere with other projects if you share packages.

Let's pip-install the Tensorflow-GPU package and check if the GPU is found.

import tensorflow as tf

# Quick check: does TensorFlow see the GPU at all?
if tf.test.gpu_device_name():
    print(tf.test.gpu_device_name())
else:
    print("No GPU.")

# Print device details (name, compute capability) for the first GPU found
gpu_devices = tf.config.list_physical_devices("GPU")
if gpu_devices:
    details = tf.config.experimental.get_device_details(gpu_devices[0])
    print(details)

We can also install some of the basic ML packages and verify them.

import sys

import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf

# Print the versions of the core ML stack to verify the installation
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU')) > 0
print("GPU is", "available" if gpu else "NOT AVAILABLE")

The final step is to perform some kind of ML benchmark on the GPU. Doing a quick search, I found only one easily applicable solution, at the website ai-benchmark.com. Unfortunately, the page, created by some people from the CV Lab at ETH Zurich, is no longer maintained (no current cards in the rating list). We can still use the package and do a basic scoring.
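Using the package is straightforward; a minimal run looks roughly like this (based on the usage documented on ai-benchmark.com, after pip-installing the ai-benchmark package):

from ai_benchmark import AIBenchmark

# Runs the bundled inference and training tests and prints a device score
benchmark = AIBenchmark()
results = benchmark.run()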

For comparison, I ran the same test on the RTX 3080TI and on the AMD Ryzen 1700 CPU (left side in the screenshot).

This provides some order-of-magnitude indication, but it is hard to say whether this is a proper approach.

More real hands-on exercises will come up soon. Stay tuned..

Unleashing the beast – Paving the way to the Omniverse

Looking back at 30 years of experiencing the (graphics) hardware (r)evolution firsthand, it is thrilling to enjoy the high level of realtime rendering quality, high resolution and performance available today. My career literally started off in the era of 256KB VGA graphics adapters (operating with bit block transfer), moving from 320×240 and 640×480 to SVGA 800×600. Some of you remember brand names like S3, ATI, Diamond and others, 25 years back when an 80486DX motherboard had 6x ISA expansion slots for a variety of sound/network/xyz adapters (now down to a single PCIe slot for the GPU).
Today the battle is fought between AMD and NVIDIA. The evolution never stopped, and I tagged along, investing in newer display adapters over the years. Since the late 2000s I have stuck with NVIDIA, starting with the GeForce 200 Series (GTX 260), then the GeForce 10 Series (GTX 1060), and now getting my hands on a real graphics powerhouse.

I managed to purchase an RTX 3080TI. I will skip the discussion of the industry's current problems producing and shipping sufficient parts, and the associated price development.
Released in June 2021, running on the Ampere microarchitecture, supporting DirectX 12 Ultimate, with 12 GB of GDDR6X memory, 10240 cores, a theoretical pixel rate of 186 GPixel/s and 34 TFLOPS FP32 (complete specs here and here), this card is quite a powerhouse; only the 3090 is more powerful in this line (the 3090TI was announced in Jan 22, but both are unobtainable and most likely absurdly expensive).
Comparing the 3080TI (unfairly) with my 13-year-old GTX 260, released in 2008, reveals the order of magnitude in Moore's law: the GTX 260 ran only 896 MB of memory, 192 cores, 16 GPixel/s and 0.48 TFLOPS FP32. If such a comparison makes any sense, the RTX 3080TI would outperform the GTX 260 by ~3000% (link).
Fun fact, the architecture name line-up since 2008: Tesla, Fermi, Kepler, Maxwell, Pascal, Volta, Turing, Ampere.
The RTX 30 series targets the high-end gaming consumer market, but its specs are close enough to GPU workstations and data centre GPUs to serve personal research usage at home. Unfortunately, the RTX 30 series is still very much in demand for the everlasting Bitcoin and Ethereum mining rat race. LHR was supposed to deter usage for mining purposes, though it seems the miner community found ways to bypass the protection (source).

After some further upgrades, including a power supply update (850W to be on the safe side) and more SSD and M.2 disk space, the rig is ready for a first performance test using 3DMark running Time Spy.
(The top score in the hall of fame with a single GPU is 28473 at the time of writing.)

I am setting up some test scenarios with Tensorflow to run on the GPU and will discuss them in another post.

Eventually, with everything up and running, I managed to take my first steps in the Omniverse and get to know the concept, the components and the plugins. There is plenty to learn and experiment with, all for free.

NVIDIA Omniverse Launcher

I will keep you posted about my adventures in the Omniverse, especially in the context of Digital Twins. Stay tuned..

Thoughts about the Metaverse

The Metaverse has been increasingly trending since Mark Zuckerberg announced (Oct 28th 2021) both the rebranding of Facebook to Meta and the next big thing, the “Metaverse”.

As much as I enjoy seeing technology maturing, being democratized and becoming accessible, I also want to stay realistic. Some reflections about the current hype, or the next evolutionary step in human interconnectedness.

Photo by Lucrezia Carnelos on Unsplash
  • The Metaverse emphasizes VR and AR as the medium to immerse yourself. VR has seen several waves of adoption since 1970, growing from exclusive use in research labs to a mass consumer product. But to this day, general adoption has not grown significantly outside the gaming and simulation niche.
  • While several expensive high-end headsets have been released and announced for enterprise customers (Varjo, Pimax, XTAL, ..), there is not much in the consumer space; the Quest 2 was released in 2020 (overview). Though suddenly everyone is working on something (Apple, ..). If the Metaverse is the next internet, accessible by everyone, we need devices as cheap as mobile phones. And NO, Google Cardboard is not an option.
    AR still has a long way to go to achieve mixed reality with seamlessly embedded information. AR disappeared from the Gartner hype cycle in 2020, which had even predicted enterprise adoption for 2021 (didn't happen?).
  • The human bioware is not being updated. Newer VR devices are getting better: more lightweight, higher resolution, less latency etc., but VR fatigue and VR sickness are still an issue. You can get used to it, but it will still affect adoption. Choose the wrong environment or platform for getting started in VR, and a spoiled first experience might make you leave for good. I know few people who are “in VR” for more than one hour regularly.
  • If we believe this is the next step in the evolution, why should we rely solely on the company META, whose potential influence on behaviour and opinion will only grow further? Right now, the industry should discuss standards for seamless interoperability, security and data exchange, ensuring the Metaverse will not become a separate, proprietary internet, but an accessible communication and sharing platform, like the internet itself in its beginning. Had we taken a proprietary approach in the 1990s, HTML would not be readable today but rather a binary blob to open in the browser, and open source might not be as widespread as we see it today. The Metaverse must be open, no matter what hardware or platform is used to access it.
  • META has not yet released Horizon Home; the video material we see is conceptual work and vision (‘Not actual images. Images are strictly for illustrative purpose only.’). Solely the Horizon Workrooms are available as beta (at the time of writing this post), and only compatible with the Quest 2 (it does not even work with the Rift S). You can use flat-screen access though, which makes little sense to me. The Quest 2 will not be able to render the illustrative concepts, except perhaps by streaming high-end rendered content.
  • At the same time, NVIDIA comes with its take on the Metaverse toolset, Omniverse, but with existing products and plugins and a tangible roadmap.

Conclusion:

  • Let's stay excited, but realistic. Embrace innovative ideas to come.
  • Ensure it will be the Open Metaverse.
  • Do good and avoid evil. Let's not implement the dystopian future depicted in the referenced literature (Snow Crash and others).
  • I am eager to try, experiment and pilot. Especially in the enterprise context, there are use cases for Digital Twins, Simulation and Collaboration which make sense and will bring benefits.

Recommended reading:

Google Trends: search interest in “Metaverse”

Thin Client Revival for Generated Art

Part 1 – Hardware

I have been experimenting with generated art every once in a while for a couple of years now. It allows me to cross the barrier between coding business systems and the world of art, literally creating software that serves absolutely no sincere business purpose but creates artistic enjoyment. Using the Processing environment (/library/programming language), it is amazing what fantastic visuals you can produce with little code. Note, Processing is in its 20th year now, starting a long time before we got into the current hype of AI-generated art using GANs (Generative Adversarial Networks) etc. and people making money with NFTs (Non-Fungible Tokens). To be precise, Processing is more a tool for procedural art: good old algorithms creating visuals, spiced up with randomness or by picking up external actors (e.g. a webcam). Today I won't discuss NFTs or whether it makes sense to buy a JPG file for millions of dollars, nor will I talk about GAN art based on deep learning, like style transfer and similar (another post will cover that).

How to make generated art accessible to an audience outside the browser? With traditional means we would print the art piece, frame it and hang it on the wall. This limits us to static pieces, but we aim for the creation process and animated pieces as well. I started to work on a setup that runs as an art installation using screens and projectors, so people in a public space can observe and witness the process of a piece being created, or interact with it. I like the uniqueness of each visual when some kind of randomness is used as a parameter. Whatever you see will disappear forever once the screen moves on (provided no screenshot or print is created); you will not see the exact same thing again, though very similar creations come out of the same algorithm.
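To make the "algorithm plus randomness" idea concrete: the following is not one of my Processing sketches, just a minimal stand-in in plain Python with Pillow. It draws a random walk; every run yields a unique piece that will never repeat.

import random
from PIL import Image, ImageDraw

img = Image.new("RGB", (800, 800), "black")
draw = ImageDraw.Draw(img)

x, y = 400, 400  # start in the centre of the canvas
for _ in range(5000):
    # random step, clamped to the canvas
    nx = min(799, max(0, x + random.randint(-20, 20)))
    ny = min(799, max(0, y + random.randint(-20, 20)))
    colour = tuple(random.randint(50, 255) for _ in range(3))
    draw.line((x, y, nx, ny), fill=colour, width=1)
    x, y = nx, ny

img.save("generated.png")  # this exact image will never be generated again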

Let's look at the hardware. How do we do this with little money? We need a CPU, an OS, a screen and a stand.

Thin Client

Let's revive thin client hardware that you find for a few dollars on Ebay, usually devices which spent their previous life in an ATM, a POS or behind a check-in counter at an airport. Once retired after a few years, this kind of equipment gets recycled or finds its way into the electronics second-hand market (and hopefully not into landfills or recycling yards in Africa). Using Linux as the OS, we can use most thin clients built after 2010 with a 64-bit architecture (32-bit is no longer supported by Debian-based systems), with 1 or 2 GB RAM and at least 8 GB disk space. Since we run some graphics here, we need at least decent performance. I found the Fujitsu Futro S920, launched around 2013, with the AMD G-Series GX-415GA 1.5 GHz quad-core CPU, 4 GB DDR3 RAM and an AMD Radeon™ HD 8330E graphics adapter, which even supports OpenGL 4.1. All for Euro 29,-, inclusive of the power adapter. Energy consumption is around 10 watts. Replace the 2 GB mSATA drive with a 16 or 32 GB one for another Euro 20,-.

One could argue: why not use a Raspberry Pi? With a proper casing and power adapter, I would reach almost Euro 100,-.

Fujitsu FUTRO S920

Linux OS

Debian-based OSes are my choice. Using the Lubuntu distro, we get a small memory footprint and modest disk space requirements.

Screen and Stand

For the screen I sourced 40″ screens, grade-B returns, for roughly Euro 100,- each; another way to keep this project sustainable by giving electronic equipment a second life. Now comes the handicraft challenge: building the TV stand. I prefer a portrait setup, and a professional stand is easily Euro 200,-. Some square iron tubes, basic welding knowledge and some paint do the job. Material spent per stand: about Euro 40,-.

This could even serve as a super-low-budget FIDS screen setup.

I managed to build the whole setup for less than Euro 200,-. Now it is time to get it ready for public display.

Final setup (on display: the piece ‘sandstorm’, a transformed version by the author; original by Sayama, CC BY-NC-SA 3.0)

A small desktop version made from scrap metal for a 22″ screen

In the upcoming part 2, I will talk about the software setup of the installation as well as share some insights about Processing.

Stay tuned..

Bookshelf: AI 2041

Another recommended book for the holiday break. I came across this title listening to the NVIDIA Podcast (which I also highly recommend). How will artificial intelligence change the world over the next two decades? In 10 stories, Kai-Fu Lee explores the future with a blend of science and fiction, making it more accessible to non-tech readers. It is co-authored by Chen Qiufan, who created the fictional parts. The book was only released last September (not yet available in German). Every chapter brings up complex AI topics and hotly debated issues, ranging from Deep Learning, VR and Self-Driving Cars to Quantum Computing. The non-fiction review of AI concepts analyses and describes how the technology works. It reminds me of reading Isaac Asimov's books 30 years back.

If you have read earlier books by Kai-Fu, like ‘AI Superpowers: China, Silicon Valley, and the New World Order’ or ‘My Journey into AI..’, this is my recommendation for you.

Get your copy from your favourite book dealer or online. Check out the book website here.

#RetroTech; 80s Home Computer again

I have fond memories of my first steps into computing in the 1980s, when home computing took living and study rooms by storm. For the first time, computing became widely accessible and affordable for everyone. I have only one original device at hand, so we will explore alternative retro options to go down memory lane and also visit some of the other home computing platforms. The retro craze runs through various technology trends: people are starting to value music played on hi-fi LP players and pictures taken with analog photography equipment again, while others collect old computing equipment and video game consoles. The market reacts to this demand and you can re-buy the old technology (usually emulators on modern chipsets packed into the old casings), like Sony relaunching the PS1, Nintendo the NES or Atari the 2600 console. Prices for authentic old equipment are rising too (recommended NY Times article). In this post we will have a look at the Commodore C64.

Relaunched Commodore C64 in original case

First things first: you do not need to buy any equipment for a brief visit to the home computing past; everything can be done in the browser or with emulation tools on any regular notebook or Raspberry Pi. The Commodore C64, my first own computer in 1984, I sold in 1991 to finance my first IBM-compatible PC. But with all the nostalgic memories attached to it, I bought a retro set from Retro Games Ltd. for roughly Euro 100,- (see above image), just for the sake of its physical look and feel (note: no Commodore logo or trademark is used, the brand having been sold and passed on multiple times to this day). You could achieve the same by installing RetroPie, which can emulate almost any home computer and game console of the 80s and 90s.

The Sinclair ZX81

Before looking at the C64, a quick look at the Sinclair ZX81, which I temporarily used (borrowed from a schoolmate) for about a year to do my first computing explorations. This device was released in 1981 by Sinclair Research: a very basic machine with 1 KB (!!!) of memory and a Z80 CPU at 3.25 MHz, running Sinclair BASIC and supporting only a 32 x 24 character monochrome screen (using a regular TV set). Everything was included in the box, and the user input was nothing but a pressure-sensitive membrane keyboard. An absolute nightmare for any serious typing, not to say development, but it was the only thing at hand.

Image by Evan-Amos – CC BY-SA 3.0

It did support an external add-on 64KB memory adapter and a cashier-style small printer, and the only way to load and store programs was on regular audio tapes at 250 bps. If you are keen to give it a spin, drop by this website.

3D Monster Maze by Malcolm Evans in 1981

There was no way to compile applications, so all the commercial tools and games automatically came as open source.

ZX81 Basic Source

The Commodore C64

The famous blue launch screen and the command to start the first app on the disk

The Commodore 64 (aka C64, CBM 64) was definitely THE home computing device of the 1980s, selling by far the largest numbers compared to similar devices on the market.

Several extensions and additional hardware made the device quite universal, even allowing non-gaming activities like text processing.

A few software highlights

Microsoft Multiplan

Believe it or not, the great-grandfather of Excel was released in 1982 by Microsoft itself. Very painstaking to use; absolutely the worst possible UX.

Multiplan on the C64
Wikipedia: Multiplan
Data Becker

The once-famous German publisher Data Becker had a series of office applications like Textomat, Datamat and other xyz-mats.

Source: c-64.online.com

Equally famous were their books about any C64-related content, like programming and applications of all kinds.

Cover of the 3rd revised edition, 1985
Source: c64-wiki.de
GeOS Commodore C64

Launched in 1986 (one year after Microsoft introduced Windows 1.0), GEOS (Graphic Environment Operating System) was released by Berkeley Softworks. Don't forget, this is a graphical OS on a 1 MHz 6502 processor with 64 KB of RAM! I specifically bought a mouse to use it. Fun facts: Nokia used GEOS for their Communicator series before switching to EPOC, and the source code was reverse-engineered and made publicly available on GitHub.

GEOS for the Commodore 64
Wikipedia: GEOS
Sublogic Flight Simulator II

Does anyone remember Flight Simulator 1 by Sublogic, released in 1979? State of the art at that time, considering the hardware inside an Apple IIe, but a terrible flying experience in a wireframe landscape.

Wikipedia: FS1 Flight Simulator

The sequel, Flight Simulator II, came with major improvements, colors and real-world sceneries. What a quantum leap; it kept me flying for hours. Don't forget to put on the glasses of someone living in the 80s: compared to the latest MS Flight Simulator, it looks like a joke.

Wikipedia: Flight Simulator II (Sublogic)

Other Home Computing Devices from the 80s

Many other home computing devices tried to conquer homes in the 80s, most of them not even remotely as successful as Commodore.

Amstrad CPC 464, with CTM644 colour monitor
Wikipedia: Amstrad CPC
Sinclair ZX Spectrum 48k
Wikipedia: Sinclair Spectrum
Atari 1040STf
Wikipedia: Atari ST
Apple IIe
Wikipedia: Apple IIe

Conclusion

There is quite some excitement about old technology, mostly for sentimental reasons. It allows us a little time-travel trip into the past. Sad to say, it won't keep you entertained very long; the memories feel better than experiencing it all again.

#RetroTech; The ZIP Drive

Another piece of tech memorabilia from the 1990s, hidden away in a box for 25 years and recovered during the attic exploration: the infamous Iomega Zip drive 100.

Iomega 100 ZIP Drive

This was certainly a smart innovation in the early 90s, when the predominant (transportable) medium was the 3.5″ disk with 1.4 MB. Iomega came up with this removable 100 MB storage device, using a form factor similar to a disk but offering 70 times more space. Take note: at that time the average hard disk was around 500 MB, so 100 MB was a decent backup option. The drive was not cheap at around US$ 200,-, with single disks at roughly US$ 20,-. Various types were offered, supporting IDE, SCSI, USB and FireWire connections. Still, the device was not as successful as expected; it had to compete with the (writable) CD-ROM and CD-RW and faded away in the early 2000s. Iomega no longer exists as such; the company was acquired by EMC in 2008.

The above device was recognized by Windows 10, and the 20-year-old backup files could still be read.

Some other similar devices were introduced during the same decade, and all eventually disappeared: the Jaz Drive, the EZ 135 Drive, the SuperDisk and a few more. All shared the same fate, leaving you in trouble if you trusted them for long-term archival purposes.

Usual office desk sight with storage boxes for disks.

This is a common theme and the “retro” problem we look at here, starting in the last episode with the 3.5″ disk; a few more similar cases I will discuss in upcoming posts. We are now roughly 35 years into mainstream office and home computing, and we are already facing challenges to persist data for more than a few years.
Book printing was invented by Gutenberg in the 15th century; there are still books around from medieval times, and we can still access the data, i.e. read the text. The comparison can be challenged, as it is not feasible to store today's data volumes on paper.
Fun fact: there are some tools and libraries that support creating paper-based backups. Though volume-limited, such a backup will survive doomsday and any EMP, as long as the paper does not catch fire and is laminated to protect against humidity. Give paperback a try; it even supports key encryption.

Main problems with old storage media and types:

  • File Format
    The format in which a certain type of data is stored on any medium (no matter if magnetic tape, Blu-ray or cloud storage) might no longer be supported after a few years, because the format is e.g. proprietary or outdated, like the MS Access 2.0 format from the last post.
  • Storage Media Type
    Proprietary devices from decades ago that read the respective media are no longer built, are not supported by current OSes, or simply do not function any more.
  • Media Preservation
    Depending on the media type (magnetic, optical, flash memory/semiconductor), the data can survive a longer or shorter time before it starts to degrade and becomes corrupted or unreadable.

Stay tuned for more retro tech explorations..

#RetroTech; Rewind 35 years with the 3.5″ disk

A recent visit to our attic during the Xmas break revealed a number of technology artefacts from the past. Holding these items in your hands, you realize how long you have already been working in IT. Let me share some of the findings with you, like these installer disks (3.5″) sitting in a box for 20+ years. Surprisingly, the majority of these disks, kept in a dry box, can still be read without problems.

Did you notice when 3.5″ disks faded away? At some point the drives were no longer built into notebooks (the same has already happened to CD/DVD-ROM drives today) and eventually disappeared from desktop PCs too, maybe with the end of the Windows 95 start-disk. In the 1980s the 3.5″ disk was launched as a replacement for the infamous 5.25″ floppy disk. While the initial SD version (early 80s) only offered 360 kB, we could store 1.4 MB with the HD version towards 1990. Take note: a 3-minute MP3 file is roughly 4 MB in size. It was the main medium to store and transport any kind of data. Only by 2010 did Sony stop producing them; now, in 2021, the disk is extinct.

Some of the above highlights:

  • MS DOS 5 and 6: Release 5, the first version supporting 3.5″ disks, was released in 1991, the same year I bought my very first (own) IBM-compatible PC. Release 6 came in 1993, and eventually 6.22 was the last official release in 1994. (Wikipedia link)
  • MS Windows 95
    Released in 1995, it merged DOS and Windows 3.1 into one OS. The first 9x release with the distinct Windows look that persists until today. Slowly stepping into the 32-bit era; unfortunately it was not really stable, crashed frequently and slowed down over time (my most prominent memories, at least). I remember the plug'n'play feature, which was not as plug'n'play as proclaimed, and spending endless hours finding and fiddling with obscure drivers for hardware. (Wikipedia link)
    That is 25 years ago. Do you remember the commercials with the Rolling Stones song “Start Me Up” and the “Where do you want to go today?” slogan? Fun fact: Bill Gates paid something like 14 million dollars to the Rolling Stones.
  • MS Visual C++
    You notice there is no release number? Right, this is the initial (“visual”) release 1.0 from 1993, running under 16-bit Windows 3.0. My first steps with this programming language; I remember how troublesome it was to create even basic-looking application GUIs. (Wikipedia link)
  • SUSE Linux 7.2
    Five years after the initial release 4.2, this version came out in 2001. The first Linux I installed on my own PC; until then I had used Linux solely at university and work.
  • 3D Pool by Aardvark
    This 1989 game came with my first PC set, a 3D pool simulation. Quite amazing 3D rendering on a PC with a simple 256 kB S3 VGA adapter supporting 16/256 colors. Experience it here.

Using this USB disk drive, I was able to retrieve my digital source-code memories. You get these drives for about Euro 30,-. If you look for a 5.25″ solution, you have to resort to used equipment on the usual selling platforms, plus you require a desktop that still supports IDE.

USB 3.5″ disk drive
Nerve-racking transfer speed

It took only a few disks to stumble upon a time traveller, the AntiCMOS.A virus from 1994. It survived on the disk for 25 years before being kicked out by Windows 10.

Some source code retrieved from old disks, like these memories of Z80 assembler code. Can you be any closer to the CPU than this?

Z80 Sourcecode

An extract of a Turbo Pascal application that manipulated the graphics card directly using assembler.
Supposedly there was a way to brick or burn the 1992 graphics hardware with a combination of specific direct calls; I remember vivid discussions with the head of the IT institute at my university, who feared I would damage something. Today I think that was a tech myth.
I first came across Pascal in the mid-1980s at high school, in an IT class equipped with Apple IIe machines and Apple Pascal. Btw, Pascal turned 50 in 2020!

Pascal Code

I remember my very first PC system: an 80386SX running at 20 MHz with 256 kB RAM, equipped with 5.25″ and 3.5″ disk drives, plus a whopping 20 MB hard disk, which I thought would provide enough space for many years to come. I spent DM 2,500,-, today's equivalent of roughly Euro 2,300,-, for this set, inclusive of a 14″ CRT color screen and a Star LC 24-10 dot-matrix printer.

Do you fancy running the old systems? Let's go, we have a few options at hand.

  1. Original Hardware
    Provided you are willing to spend money on old hardware and find an old IBM-compatible PC (like an 80386) on Ebay, plus all the installation disks, this is truly the retro nerd way. You are going to experience the 1990s first-hand, with all the slowness, disk swapping, failing stuff, etc. I skip that one.
  2. Virtual Box
    If you still own the original disks (like I do in this case), you can spin up a DOS guest session in Oracle's VirtualBox and install everything from scratch. Much faster than option 1, but still a little more nostalgia than options 3 and 4.
Windows 3.11
Windows 95

3. DOS Emulator
Save the time of creating a virtual PC and install a native emulator in your Windows environment. Try DOSBox.

4. Online Emulator
As usual, there is an emulator for everything now, and you can spin up an old piece of hardware in your favourite browser without touching a screwdriver or a disk. Drop by the PCjs website and explore all kinds of OSes and software from the past with the click of a button.

Conclusion

A fun nostalgia experience, exploring the roots of the software and hardware we use today. I learned a lot during those barebones hands-on times back then, valuable when looking at today's IT environment, where you are layers and layers away from the hardware and the basic understanding of how things work under the hood.

There are times you need to spin up these emulators or old OSes, when you come across files that are no longer supported by modern OS and software releases. I had to install MS Access 97 in order to read old Access 2.0 databases.

MS Access 97 Installer

Stay tuned for more retro tech exploration..

Podcasts on AI and Data Science

The interest in Artificial Intelligence has exploded over the last few years, with hardware and software increasing performance massively while, at the same time, we have data in abundance to work with. Deep Learning is certainly the number one looked-for topic in Computer Science. Anyone can do ML/DL at home now; the whole field has been democratized and made accessible. With a regular laptop you can get started easily with a selection of online and local tools/resources and a huge choice of data at hand (e.g. Kaggle and other data sources), and you can scale to process larger datasets either by having more RAM and a GPU installed in your machine, or by using paid resources from AWS, Google, Microsoft and others.

The learning curve is steep; many online courses and books are available, maybe too many to choose from. Beyond that, how do you stay up to date or get more insights? The good old podcast (20 years since the term was coined) is a welcome alternative to reading. You can listen during your commute (who is commuting nowadays?) or any other physical activity. Though I find it a bit hard to get complex technical stuff (algorithms) explained without any visual context, there are still many topics to be covered, ranging from legal and ethical aspects to interviews with practitioners in various fields and much more. It is impossible to follow all the podcasts out there, but you can subscribe to a few and hand-select the episodes that are of interest or relevance to you.
Here is a list of podcasts I follow and like to highlight, updated over time. The focus is on podcasts that are produced in English and actively maintained. (Last update 2020-12-05)

Lex Fridman

Lex is a researcher at MIT, working on autonomous driving, human-robot interaction and all kinds of machine learning topics. He appears as quite an introverted character, always wearing a black suit, speaking very calmly without fuss and excitement but transporting lots of insights to his audience. His interviews cover topics from machine learning, mathematics, philosophy, ethics and astrophysics to plasma physics.

Since 2018 he has produced more than 140 episodes of his podcast, and it is amazing to listen to the high-profile people from the academic world he invites for interviews of 60 to 90 minutes in length. Among his guests were Alex Filippenko, Michio Kaku, Andrew Ng, Ian Hutchinson, Kai-Fu Lee, James Gosling, Richard Karp, Elon Musk and many more.

I also recommend watching his presentation “Deep Learning State of the Art (2020)” from the MIT Deep Learning Series and the accompanying website deeplearning.mit.edu.

Episodes: 140+ since August 2018

Podcast Website: lexfridman.com/podcast (all the episodes also available on YouTube)

In Machines We Trust

Running since summer 2020, host Jennifer Strong and the MIT Technology Review team discuss the more ethical side of machine learning. I highly recommend the episodes about the application of face recognition and its implications for society.

Episodes: 15+ since July 2020

Podcast Website: forms.technologyreview.com/in-machines-we-trust

Eye On A.I.

Former New York Times correspondent Craig S. Smith runs the audience through a very diverse range of AI-related topics by interviewing various experts.

Episodes: 61+ since October 2018

Podcast Website: www.eye-on.ai/podcast-archive

Practical AI: Machine Learning & Data Science

Chris Benson and Daniel Whitenack discuss real use cases, datasets and setups of AI exploration. Unlike many other interview-style podcasts, this one is rather hands-on.

Episodes: 115+ since July 2018

Podcast Website: changelog.com/practicalai

The TWIML AI Podcast

In this very actively maintained podcast, with new episodes every few days, Sam Charrington talks to various AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Episodes: 449+ since May 2016

Podcast Website: twimlai.com

The AI Podcast

This podcast is operated by NVIDIA, the biggest player in the GPU hardware game, and runs talks and interviews with leading experts in the field.

Episodes: 129+ since November 2016

Podcast Website: blogs.nvidia.com/ai-podcast

AI with AI

The podcast, moderated by Andy Ilachinski and David Broyles from the Center for Autonomy and Artificial Intelligence, a group inside CNA (Center for Naval Analyses, research for the U.S. Navy and Marine Corps), discusses the latest developments in the field. The topics are sometimes related to military use of AI, but recent episodes also look into Covid-related topics.

Episodes: 15+ since July 2020

Podcast Website: www.cna.org/news/AI-Podcast

Photo by Austin Distel on Unsplash