AI-Created Website for a Groundhandling company

The online community is going wild with all kinds of experiments and playtime with ChatGPT, GPT-3, Stable Diffusion, DALL-E and Midjourney. Social media, from Facebook, Twitter and Instagram to the professional networking platform LinkedIn, is full of posts (including the ones I add here..).

Time to experiment further and let the AI create a (mockup) website and some marketing text for our fictitious Airport Groundhandling company called BestEverHandling Inc.

Website Landing Page Mockup

Prompt: create a website for an airport groundhandling services company called “BestEverHandling Inc.”, websites,ux,ui, online

Drafts created by Stable Diffusion

That's not going to fly…..

Disruption by aircraft design

Drafts created by Midjourney

Frankenstein airframe design..

Final Mockup

Conclusion: We can create a visual mockup with no readable text, maybe good enough for an initial design proposal or discussion. The images show disturbing, corrupted or ‘Frankenstein’ creations of aircraft and equipment.

Marketing Text

We need to create some marketing text for the company. In order to provide some guidance to the language model, I use the descriptions from Wikipedia and Law Insider.

Prompt for OpenAI GPT-3:
In aviation, aircraft ground handling defines the servicing of an aircraft while it is on the ground and parked at a terminal gate of an airport. Ground handling services means the supply at an airport, on a fee or contract basis, of the following: airline representation, administration and supervision; passenger handling; baggage handling; ramp services; catering; air cargo and mail handling; fueling of an aircraft; aircraft servicing and cleaning; line maintenance; surface transport; flight operations, crew administration and flight planning. Ground handling services do not include self-handling; security; aircraft repair and maintenance; or management or operation of essential centralised airport infrastructure such as de-icing facilities, fuel distribution systems, baggage handling systems, and fixed intra-airport transport systems. 

Create a half page marketing style description of groundhandling services provided by the company "BestEverHandling Inc.".

Marketing text created by ChatGPT

Unlike the website visual mockup, we quickly get a quite realistic text proposal.

BestEverHandling Inc. is your go-to provider for all your aircraft ground handling needs. Our team of experienced professionals offers a comprehensive range of services to ensure that your aircraft is serviced, loaded, and cleaned efficiently and effectively.
We offer a variety of ground handling services including:
-Airline representation, administration, and supervision
-Passenger handling, including check-in and boarding assistance
-Baggage handling and management
-Ramp services, such as aircraft loading and unloading
-Catering services to ensure that your flight crew and passengers are well-fed
-Air cargo and mail handling
-Fueling of aircraft
-Aircraft servicing and cleaning, both inside and out
-Line maintenance services to ensure your aircraft is in top condition
-Surface transport and flight operations assistance
-Crew administration and flight planning

Our goal is to provide high-quality ground handling services to our clients and ensure that your aircraft is ready for take-off as soon as possible. Our team is available 24/7 to accommodate your needs, and we use state-of-the-art equipment to ensure that your cargo and baggage are handled safely and securely. We also use environmentally friendly cleaning products to keep your aircraft spotless.
Choose BestEverHandling Inc. for all your ground handling needs, and experience the difference in quality and reliability. With our expert team, you can be confident that your aircraft is in good hands.

Conclusion: It uses our input and builds text around it that could be used for different businesses, just by replacing the keywords.
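To illustrate that point, the generated copy behaves like a fill-in-the-blank template. A toy sketch (the second company and service names below are invented for the example):

```python
from string import Template

# Toy illustration: the generated marketing copy reads like a template
# in which only a few keywords are business-specific.
copy_template = Template(
    "$company is your go-to provider for all your $service needs. "
    "Our team is available 24/7, and we use state-of-the-art equipment."
)

handling = copy_template.substitute(
    company="BestEverHandling Inc.", service="aircraft ground handling")
catering = copy_template.substitute(
    company="BestEverCatering Inc.", service="inflight catering")

print(handling)
print(catering)
```

Swapping the two placeholders produces plausible copy for an entirely different business, which is exactly what the model appears to do at scale.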

Extra: AI-created Text Detection Tools

As quickly as the AI systems that create text have advanced, powerful tools to detect AI-generated text have kept pace. Let's test whether our text is flagged as machine-generated.

General implication:

  • We will see more spam and scam content created with these tools. Traditional spam still shows a lot of spelling and grammar mistakes, which helps to spot it.
  • In education and academia, these detection tools are needed to verify genuine content, a problem similar to the good old copy-and-paste from Wikipedia or research papers (plagiarism).
  • The creation tools can be used to optimize text and whole websites for SEO purposes.
  • If future search tools go in this direction, we won't have any reference check, i.e. no way to tell where the merged information originates from.
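For context on how such detectors work: tools like GPTZero score text on perplexity and "burstiness", i.e. the variation in sentence length. Below is a stdlib-only toy sketch of the burstiness idea; it is purely illustrative and not a usable detector.

```python
import re
from statistics import mean, pstdev

def burstiness(text):
    """Toy heuristic: coefficient of variation of sentence lengths.
    Human text tends to vary more; low variation *may* hint at
    machine-generated text. NOT a reliable detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "We handle baggage. We handle cargo. We handle fuel. We handle mail."
varied = "Short. But sometimes a writer meanders through a much longer thought. Then stops."
print(burstiness(uniform))  # 0.0 - perfectly uniform sentence lengths
print(burstiness(varied))   # noticeably higher
```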

GPT-2 Output Detector Demo (free tool, but trained for GPT-2)

GPTZero (free)

Content at Scale (commercial, free test)

Airports as seen by Stable Diffusion

Stable Diffusion is a deep learning model that converts textual input into images, trained on a 5-billion-image set (LAION-5B). Released through a collaboration between Stability AI, the Computer Vision & Learning Group at LMU Munich, and Runway, both the source code and the model weights are publicly available.

I used the Hugging Face hosted model to create some artwork and images in various styles. Below you see 50 images created by the Stable Diffusion model, applied to the term airport combined with an art style, without further input.

Enjoy the tour !

New: More of this work ? Follow me on Twitter !

Airport Artwork

Renaissance Art

Expressionism

Ukiyo-E

Cubism

Futurism

Symbolism

Art Nouveau

Andy Warhol

Albrecht Dürer

Salvador Dali

Pop Art

Cartoon

Manga

Random Art Styles

Airport Photography

Airport

Groundhandling

Baggage Handling

Vertiport

Honorary Mention

Aircraft in steampunk style

Conclusion

  • The copyright of the creation lies with the person starting the process, not with the creators of the model, nor with the artists who created the original art styles or artwork.
  • Every image is unique, but you will see a lot of similar-looking results depending on the terms used.
  • Some of the requested styles are not properly applied (from an artistic point of view), especially the exotic ones, which quite likely have only a few source images in the model.
  • A lot of the images have a high level of randomness and unreal details. When providing more details in the prompt, the images tend to become more realistic (see sample below).
  • Artists like Andy Warhol, Pablo Picasso and Salvador Dali spent lifetimes creating art pieces and shaping epochs of art; their work is now just the base or source for anyone to create something in 5 seconds, with one click.
  • The technology is certainly amazing and is now accessible for everyone to play with, but is it a creative process? We will see further hype, and this kind of artwork popping up in public spaces (airport lounges) without an actual artist being involved.
  • We will see so much of it that it becomes boring and tiring.

“A group of astronauts loading baggage into an aircraft”

ChatGPT and Python in 5 minutes

The launch of OpenAI‘s ChatGPT in late November triggered quite a wave of excitement and media attention, following the DALL-E 2 hype earlier this year. Supposedly OpenAI acquired 1 million users in 5 days. The chatbot is built on top of the GPT-3.5 language model and was created using the Reinforcement Learning from Human Feedback approach. The results are often indistinguishable from text written by a person, and with ChatGPT we can have a conversation, not just a question-answer exchange.

Without jumping further into the world of NLP theory, we are going straight into exploring it, at a Hello World level, as a quick-start.

You have the choice of testing the chatbot by yourself on OpenAI’s website (account and login required) or, which is more interesting, let’s build some Python code to integrate the service into your own app. Please note, at the time of writing, the service is considered beta and free for research or prototyping. Certainly OpenAI will mature and monetize this service eventually and offer the usual pay-per-usage model.


Please note, the sample code and API implementation below solely call the GPT-3 model.

There are a few limitations we need to be aware of:

  • There is no real-time information. You cannot ask for the current weather or stock exchange information, etc.
  • Like other language models, the training data is huge but still limited.
  • OpenAI improved on the previous InstructGPT, but there is still wrong and biased information.
  • There is variance in the responses; a minor tweak to the question can trigger completely different answers.

In order to use the API we need an OpenAI account and an active API-key.

Let’s start up our favourite Python IDE and then add the openai package.

Interacting with the API is as simple as one method call with a couple of parameters to pass the API key, the prompt/question and the model engine.

import openai

openai.api_key = "YOUR_API_KEY"  # from your OpenAI account settings

model_engine = "text-davinci-003"  # example GPT-3 model
prompt = "Explain the significance of TOBT in the flight departure process to me."

completion = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=1024,     # upper bound on response length
    n=1,                 # number of completions to generate
    stop=None,           # no custom stop sequence
    temperature=0.5,     # 0 = deterministic, higher = more creative
)
print(completion.choices[0].text.strip())

I highly recommend reviewing the documentation on the parameters, in order to choose the right model and the temperature value, which influences the response accuracy and quality.
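To keep those parameters in one place, here is a small hypothetical helper that only assembles the keyword arguments for the call; the model name and default values are examples, not recommendations.

```python
# Hypothetical helper: assembles the kwargs for openai.Completion.create()
# so the role of each parameter stays explicit. Defaults are example values.
def build_completion_request(prompt,
                             model_engine="text-davinci-003",
                             temperature=0.5,
                             max_tokens=1024):
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return {
        "engine": model_engine,
        "prompt": prompt,
        "max_tokens": max_tokens,    # upper bound on response length
        "n": 1,                      # number of completions to generate
        "stop": None,                # no custom stop sequence
        "temperature": temperature,  # 0 = deterministic, higher = more creative
    }

params = build_completion_request("Explain the significance of TOBT to me.")
# completion = openai.Completion.create(**params)  # requires a valid API key
```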

I could not resist asking an airport-related question (the significance, or meaning, of the TOBT milestone in the airport A-CDM process). Let's make minor tweaks to the question, keeping the parameters identical, and look at the variety in the responses.

To provide a proper reference, let's look at the definition we find in the IATA A-CDM recommendation booklet.

Now, let's challenge GPT-3.

Explain the significance of TOBT in the flight departure process to me. 

TOBT (Target Off-Block Time) is an important part of the flight departure process. It is the target time for when the aircraft is expected to push back from the gate and begin taxiing for takeoff. The TOBT is determined by the airline and is based on the estimated time for all pre-departure activities, including boarding, cargo and baggage loading, fueling, and any other necessary preparations. The TOBT is an important factor in ensuring that flights depart on time and arrive at their destination on schedule.

✔️ Comment: With the proper context, this is quite an accurate response, though there is no reference to the A-CDM process.


Explain the significance of A-CDM milestone TOBT to me.

A-CDM (Airport Collaborative Decision Making) is a process designed to improve the efficiency of airport operations. TOBT (Target Off-Block Time) is a milestone within the A-CDM process. It is the time that the aircraft is expected to leave the gate and begin its taxi to the runway. It is a key milestone because it sets the timeline for all other activities that must take place before the aircraft can take off. It is also used to coordinate the activities of all the different stakeholders involved, such as the airline, the airport, air traffic control, and ground handlers. By having a clear timeline, it ensures that all the necessary steps are completed in a timely manner and that the aircraft can depart on schedule.

✔️ Comment: The earlier answer with A-CDM context added.


Explain the significance of ACDM milestone TOBT to me.

ACDM milestone TOBT (Time of Beginning of Takeoff) is a critical milestone in the aircraft take-off process. It is the moment when the aircraft begins its take-off roll and is the point at which the aircraft is committed to taking off and can no longer abort the take-off. This milestone is important because it is a key indicator of the airplane’s readiness to take off and is used to determine when the aircraft is airborne and when the aircraft is legally considered to be in flight. It is also used to measure the time it takes for the aircraft to take off and the amount of runway used.

Comment: The missing hyphen in “ACDM” generates a wrong answer, plus some creativity, inventing a new milestone “Time of Beginning of Takeoff”.


What is airport TOBT ?

Airport TOBT (Turn Around Time) is the amount of time it takes for an aircraft to arrive at an airport, unload passengers, refuel, and depart again. It is typically measured in minutes, and is used to measure the efficiency of an airport's operations.

Comment: Here, we mix the turnaround definition into the response.


Explain the significance of TOBT to me.

TOBT (Time of Beginning of Task) is an important concept in project management. It is the point in time when a task begins and is used to measure the progress of the task. TOBT is used to create a timeline for the task and to track its progress. It is also used to help determine the costs associated with the task, as well as to identify any potential delays. TOBT is an important tool for project managers to effectively plan and manage their projects.

Comment: With no context provided, the response is pointing to a project management term.


Conclusion

  • The above test is by no means exhaustive or academic, rather a random test. But this is the way a person without any prior knowledge might interact with the chatbot to seek answers.
  • Minor changes to the question lead to significant differences and even wrong answers. There is no transparency and no reference is provided, unlike Google, where you see the source of the information.
  • The above responses are not copied but assembled; note the differences in the explanation of the same term.

IoT Working Bench – Where the ideas materialize.

What is so amazing about IoT?
You can get started easily and on a very small budget, working with microprocessors, single-board computers and all kinds of electronics, like sensors and more. For the standard kits we discuss here, lots of online documentation, books and websites are available; even interested people with very little IT or electronics knowledge, or students at secondary schools, can get some hands-on experience with easy projects.

With a simple workbench, you can do prototyping and evaluate before you even consider going into series production, or maybe just build a dedicated one-off device.

Microprocessor and SBC

ESP32

The ESP32 SoC (System on Chip) microcontroller by Espressif is the tool of choice when aiming for a small footprint in terms of size (the chip itself measures 7x7 mm), power consumption and price. It supports a range of peripherals: I2C, UART, SPI, I2S, PWM, CAN 2.0, ADC, DAC. Wi-Fi 802.11, Bluetooth 4.2 and BLE are already on board.

The benefits come with limitations though: the chip operates at 240 MHz and the memory counts in KiB (320 KiB RAM and 448 KiB ROM). Memory consumption has to be designed carefully, and with a conservative approach towards running the device in its various live and sleep modes, it can consume as little as 2.5 µA (hibernation) but can also draw 800 mA when everything is running at full swing with Wi-Fi and Bluetooth enabled. The ESP32 and its variants teach you proper IoT design. You can buy the ESP32 as a NodeMCU development board for less than Euro 10,-.
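The current figures above are what make or break a battery-powered design. A back-of-envelope sketch, where the current draws are taken from the text and the wake interval and battery size are invented example values:

```python
# Back-of-envelope battery-life estimate for a duty-cycled ESP32 design.
# Current draws are taken from the text (2.5 uA hibernation, ~800 mA at
# full swing); the wake interval and battery size are invented examples.
def avg_current_ma(active_ma, sleep_ma, active_seconds, period_seconds):
    """Average current for a device that wakes every `period_seconds`
    and stays active for `active_seconds`."""
    duty = active_seconds / period_seconds
    return active_ma * duty + sleep_ma * (1 - duty)

# Example: wake for 5 s every 10 minutes, hibernate otherwise.
avg = avg_current_ma(active_ma=800, sleep_ma=0.0025,
                     active_seconds=5, period_seconds=600)
battery_mah = 2000  # a typical small LiPo cell
hours = battery_mah / avg
print(f"average draw {avg:.2f} mA -> roughly {hours / 24:.0f} days")
```

Even this crude model shows why the sleep modes matter: the average draw is dominated almost entirely by the short active bursts.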

Arduino

The Arduino's history goes back to 2005, when it was initially released by the Interaction Design Institute Ivrea (Italy) as an electronics platform for students. Released into the wild as open-source hardware for over 15 years, it has a huge user community, plenty of documentation and projects ready to replicate.

The Arduino, while somewhat similar to the ESP32 (though not as powerful, slower, and with less memory), is more beginner-friendly. The coding is done with sketches (C language) uploaded to the device via USB, a logic similar to Processing.

If your project has anything to do with image, video or sound capturing, the Arduino (and the ESP32) is not the right choice; choose the Raspberry Pi as the minimum platform.

The Arduino has a price tag between Euro 10,- and 50,- depending on the manufacturer and specs. For education purposes you find it packaged together with sensors and shields for basic projects.

Raspberry Pi

The Raspberry Pi (introduced in 2012) is the tool of choice if you need a more powerful device that runs an OS, can be connected to a screen, supports USB devices, and provides more memory, more CPU power and easy-to-code features. Connected to a screen (2x HDMI), it can serve as a simple desktop replacement for a regular user profile: surfing the web, watching movies and doing office jobs with LibreOffice.

The current Raspberry Pi 4 ranges between Euro 50,- and 100,- (inclusive of casing and power supply).

Edge or ML Devices

These devices are similar to the Raspberry Pi platform in terms of OS, connectivity, GPIOs etc., but lean more towards serious data processing and ML inference at the edge.

NVIDIA Jetson

NVIDIA launched its embedded computing board in 2014 and has released several new versions since then. The current one is the Jetson Nano 2GB Developer Kit, available for less than Euro 70,-. Together with all the free documentation, courses and tutorials, this is a small powerhouse which can run parallel neural networks. With the JetPack SDK it supports CUDA, cuDNN, TensorRT, DeepStream, OpenCV and more. How much cheaper can you make AI accessible on a local device? More info at NVIDIA.

Coral Dev Board

The single-board computer to perform high-speed ML inferencing. The local AI prototyping toolkit was launched in 2016 by Google and costs less than Euro 150,-. More info at coral.ai.

Sensors

There is a myriad of sensors, add-ons, shields and breakouts for near endless prototyping ideas. Here are a few common sensors to give a budget indication.

Note (1): There is quite a price span between buying these sensors/shields locally (Germany) and from the source (China); it can be significantly cheaper to order from the Chinese reseller shops (though it might take weeks to receive the goods, and worse, you might spend time collecting them from the customs office).

Note (2): Look at the specs of the sensors/shields you purchase and check the power consumption (including low-power or sleep modes) and the accuracy.

  • GY-68 BMP180: air pressure and temperature
  • SHT30: temperature and relative humidity
  • SDS011: dust sensor (PM2.5, PM10)
  • SCD30: CO2
  • GPS: geo-positioning using GPS, GLONASS, Galileo
  • GY-271: compass
  • MPU-6050: gyroscope, acceleration
  • HC-SR04: ultrasonic distance sensor
The author's IoT Working Bench

Some devices in the above image: Raspberry Pi 4B, Arduino (Mega, Nano), Orange Pi, Google Coral Dev Board, NVIDIA Jetson Nano, ESP32, plus a few sensors/add-ons like Lidar, LoRaWAN, GPS, SCD30 (CO2), BMP180 (temperature, pressure), PMSA0031 (dust particles PM2.5, PM10) and a micro-stepper motor shield.

What else do we need?
Innovative ideas, curiosity to play and experiment, and the willingness to fail and succeed with all kinds of projects.
A 3D printer comes in handy to print casings or other mechanical parts.

Next Steps
The step from prototyping in the lab to mass production of an actual device is huge, though possible with the respective funding at hand. There is a big difference between hand-producing one or a few devices that you have full control over, and manufacturing, shipping and supporting tens of thousands of devices as a product. You have to cover all kinds of certifications (e.g. CE for Europe) and consider having the device designed and produced by a third party (EMS).

Another aspect is the distribution of IoT devices at scale. A device operating in a closed environment, e.g. consumer appliances that solely communicate locally, does not require a server backend. For devices deployed at large, e.g. in a fleet management system or across different types of devices, it is recommended to use one of the IoT platforms in the cloud or locally (AWS, Microsoft, Particle, IBM, Oracle, OpenRemote, and others).

Stay tuned..

Taming the beast – Some GPU benchmarking

Resuming with the setup and benchmarking of the RTX 3080TI. After the initial basic 3D-rendering FPS tests, it is time to get our hands dirty with some ML tests. Before trying to benchmark the GPU, we need to get the required Tensorflow packages and NVIDIA toolkits up and running under Windows.

For this setup we assume we have Windows 10 and we will use PyCharm as our Python IDE.

The required NVIDIA basic ingredients:

  1. Download and install the latest driver for the GPU from the NVIDIA download page. The CUDA toolkit requires a minimum driver version (more info).
  2. Download and install the CUDA toolkit (link) (at the time of this post, version 11.6)
  3. Download and install the cuDNN library (link). Beware, there is a dependency between the versions of cuDNN and CUDA. I was not able to make the latest versions of both (cuDNN 8.3.1 and CUDA 11.6) work for our Tensorflow setup.
    Download the latest 8.1.x version of cuDNN instead.

Following the official installation guide (adding insights from some blogs and forums), we still have to make some manual changes to our system.

  • Copy the relevant library files from the cuDNN zip file to the respective CUDA path folders.
  • Ensure the relevant paths are set up in the Windows system settings for environment variables.

With this we can start PyCharm, create a project and add the Tensorflow packages. If you choose a different method, make sure you use virtual environments: the packages sum up to 2 GB and come with potential dependency problems, interfering with other projects in case you share packages.

Let's pip-install the Tensorflow-GPU package and check whether the GPU is found.

import tensorflow as tf

# List the physical GPU devices Tensorflow can see
print(tf.config.list_physical_devices("GPU"))

if tf.test.gpu_device_name():
    print(tf.test.gpu_device_name())
else:
    print("No GPU.")

gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
    details = tf.config.experimental.get_device_details(gpu_devices[0])
    print(details)

We can also install some of the basic ML packages and verify them.

import sys

import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf

print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU'))>0
print("GPU is", "available" if gpu else "NOT AVAILABLE")

The final step is to perform some kind of ML benchmark on the GPU. Doing a quick search, I found only one easily applicable solution, at the website ai-benchmark.com. Unfortunately, the page, created by people from the Computer Vision Lab at ETH Zurich, is no longer maintained (no current cards in the rating list). We can still use the package and do a basic scoring.
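As a crude, hardware-agnostic fallback, one can at least time a fixed workload and compare machines by order of magnitude. The pure-Python sketch below is in no way comparable to ai-benchmark's neural-network-based score; it is only a sanity check.

```python
import time

def time_workload(n=120):
    """Multiply two n x n integer matrices in pure Python and
    return the elapsed wall-clock seconds."""
    a = [[(i * j) % 7 for j in range(n)] for i in range(n)]
    b = [[(i + j) % 5 for j in range(n)] for i in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    # Sanity-check one element of the result
    assert c[0][0] == sum(a[0][k] * b[k][0] for k in range(n))
    return elapsed

print(f"{time_workload():.3f} s for the reference workload")
```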

Comparing, by running the same test, the RTX 3080TI with the AMD Ryzen 1700 CPU (left side in the screenshot).

This provides some order of magnitude, but it is hard to say whether this is a proper approach.

More real hands-on exercises will come up soon. Stay tuned..

Unleashing the beast – Paving the way to the Omniverse

Looking back at 30 years of experiencing the (graphics) hardware (r)evolution firsthand, it is thrilling to enjoy the level of realtime rendering quality, resolution and performance available today. My career literally started in the era of 256 KB VGA graphics adapters (operating with bit block transfer), moving from 320×240 and 640×480 to SVGA 800×600. Some of you remember brand names like S3, ATI, Diamond and others, 25 years back, when an 80486DX motherboard had 6x ISA expansion slots for a variety of sound/network/xyz adapters (now down to a single PCIe slot for the GPU).
Today the battle is fought between AMD and NVIDIA. The evolution never stopped, and I tagged along, investing in newer display adapters over the years. Since the late 2000s I have stuck with NVIDIA, starting with the GeForce 200 Series (GTX 260) and the GeForce 10 Series (GTX 1060), and am now getting my hands on a real graphics powerhouse.

I managed to purchase an RTX 3080TI. I will skip the discussion of the industry's current problems producing and shipping sufficient parts, and the attached price development.
Released in June 2021, running on the Ampere microarchitecture, supporting DirectX 12 Ultimate, with 12 GB of GDDR6X memory, 10240 cores, a theoretical pixel rate of 186 GPixel/s and 34 TFLOPS FP32 (complete specs here and here), this card is quite a powerhouse; only the 3090 is more powerful in this line (the 3090TI was announced in Jan 22, but both are unobtainable and most likely absurdly expensive).
Comparing the 3080TI (unfairly) with my 13-year-old GTX 260, released in 2008, reveals the order of magnitude of Moore's law. It ran only 896 MB of memory, 192 cores, 16 GPixel/s and 0.48 TFLOPS FP32. If such a comparison makes any sense, the RTX 3080TI would outperform the GTX 260 by ~3000% (link).
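Cross-checking that figure against the raw specs quoted above (the spec numbers are taken from the text; the ~3000% figure from the linked comparison evidently blends several metrics):

```python
# Ratios of the quoted specs: RTX 3080 Ti vs GTX 260.
specs = {
    "RTX 3080 Ti": {"tflops_fp32": 34.0, "gpixel_s": 186, "mem_mb": 12288},
    "GTX 260":     {"tflops_fp32": 0.48, "gpixel_s": 16,  "mem_mb": 896},
}
for key in ("tflops_fp32", "gpixel_s", "mem_mb"):
    ratio = specs["RTX 3080 Ti"][key] / specs["GTX 260"][key]
    print(f"{key}: {ratio:.1f}x")
# FP32 throughput alone suggests ~70x, pixel rate ~12x, memory ~14x.
```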
Fun fact, the architecture name line-up since 2008: Tesla, Fermi, Kepler, Maxwell, Pascal, Volta, Turing, Ampere.
The RTX 30 series targets the high-end gaming consumer market, but its specs are close enough to GPU workstations and data centre GPUs to serve personal research usage at home. Unfortunately, the RTX 30 series is still very much in demand for the everlasting Bitcoin and Ethereum mining rat race; LHR was supposed to deter usage for mining purposes, though it seems the miner community found ways to bypass the protection (source).

After some further upgrades, including a power supply update (850 W to be on the safe side) and more SSD and M.2 disk space, the rig is ready for a first performance test using 3DMark running Time Spy.
(Top Score in the hall of fame with a single GPU is 28473 at the time of writing)

I am setting up some test scenarios with Tensorflow to run on the GPU. Will discuss this in another post.

With everything eventually up and running, I managed to take my first steps in the Omniverse, getting to know the concept, the components and the plugins. There is plenty to learn and experiment with, all for free.

NVIDIA Omniverse Launcher

I will keep you posted about my adventures in the Omniverse, especially in the context of Digital Twins. Stay tuned..

Thoughts about the Metaverse

The Metaverse has been increasingly trending since Mark Zuckerberg announced (Oct 28th, 2021) both the rebranding of Facebook to Meta and the next big thing, the “Metaverse”.

As much as I enjoy seeing technology maturing, being democratized and becoming accessible, I also want to stay realistic at the same time. Some reflections about the current hype or the next evolutionary step in human interconnectedness.

Photo by Lucrezia Carnelos on Unsplash
  • The Metaverse emphasises VR and AR as the media to immerse yourself. VR has seen several waves of adoption since the 1970s, growing from exclusive research-lab use to a mass consumer product. But until today, general adoption has not grown significantly outside the gaming and simulation niche.
  • While several expensive high-end headsets have been released or announced for enterprise customers (Varjo, Pimax, XTAL,..), there is not much in the consumer space; the Quest 2 was released in 2020 (overview). Though suddenly everyone is working on something (Apple,..). If the Metaverse is to be the next internet, accessible by everyone, we need devices as cheap as mobile phones. And NO, Google Cardboard is not an option.
    AR still has a long way to go to achieve mixed reality with seamlessly embedded information. AR disappeared from the Gartner hype cycle in 2020, which had even predicted enterprise adoption in 2021 (didn't happen?).
  • The human bioware is not being updated. Newer VR devices are getting better: more lightweight, higher resolution, less latency etc., but VR fatigue and VR sickness are still an issue. You can get used to it, but it will still affect adoption. If you choose the wrong environment or platform to get started in VR and it spoils your first experience, you might leave for good. I know few people who are “in VR” for more than 1 hour regularly.
  • Believing this is the next step in the evolution, why should we solely rely on the company META, whose potential influence on behaviour and opinion will grow further? Right now, the industry should discuss standards for seamless interoperability, security and data exchange, ensuring the Metaverse will not become a separate, proprietary internet, but an accessible communication and sharing platform, like the internet itself in its beginnings. If we had taken a proprietary approach in the 1990s, HTML would not be readable today, but rather a binary blob to open in the browser, and open source might not be as widespread as we see it today. The Metaverse must be open, no matter what hardware or platform is used to access it.
  • META has not yet released Horizon Home; the video material we see is conceptual work and vision (‘Not actual images. Images are strictly for illustrative purpose only.’). Only the Horizon Workrooms are available as beta (at the time of writing this post), and only compatible with the Quest 2 (it doesn't even work with the Rift S). You can use flat-screen access though, which makes little sense to me. The Quest 2 will not be able to render the illustrative concepts, except perhaps by streaming high-end rendered content.
  • At the same time, NVIDIA comes with its take on the Metaverse toolset, Omniverse, but with existing products and plugins and a tangible roadmap.

Conclusion:

  • Let's stay excited, but realistic. Embrace the innovative ideas to come.
  • Ensure it will be the Open Metaverse.
  • Do good and avoid evil. Do not implement the dystopian future depicted in the referenced literature (Snow Crash and others).
  • I am eager to try, experiment and pilot. Especially in the enterprise context, there are use cases for Digital Twins, simulation and collaboration which make sense and will bring benefits.

Recommended reading:

Google Trends

Thin Client Revival for Generated Art

Part 1 – Hardware

I have been experimenting with generated art once in a while for a couple of years now. It allows me to cross the barrier between coding business systems and the world of art, literally creating software that serves absolutely no sincere business value but creates artistic enjoyment. Using the Processing environment (/library/programming language), it is amazing what fantastic visuals you can produce with little code. Note, Processing is now in its 20th year, launched long before we got into the current hype of AI-generated art using GANs (Generative Adversarial Networks) and people making money with NFTs (Non-Fungible Tokens). To be precise, Processing is more a tool for procedural art: good old algorithms creating visuals, spiced up with randomness or picking up external actors (e.g. a webcam). Today I won't discuss NFTs, or whether it makes sense to buy a JPG file for millions of dollars, nor will I talk about GAN art based on deep learning, like style transfer and similar (another post will cover that).

How do we make generated art accessible to an audience outside the browser? With traditional means we would print the art piece, frame it and hang it on the wall. This limits us to static pieces, but we aim for the creation process and animated pieces as well. I started to work on a setup that runs as an art installation using screens and projectors, so people in a public space can observe and witness the process of a piece being created, or interact with it. I like the uniqueness of each visual that uses some kind of randomness as a parameter. Whatever you see will disappear forever once the screen moves on (provided no screenshot or print is created); you will never see the exact same thing again, though very similar creations come out of the same algorithm.
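To give a flavour of such a procedural piece outside the Processing environment, here is a minimal Python sketch of my own invention (not from the installation itself): a seeded random walk rendered to a PPM image, where each new seed produces a unique, unrepeatable result.

```python
import random

def random_walk_art(size=200, steps=20000, seed=None):
    """Render a colour-shifting random walk into a binary PPM image.
    A fixed seed reproduces the piece; a new seed creates a new one."""
    rng = random.Random(seed)
    pixels = [[(255, 255, 255)] * size for _ in range(size)]
    x = y = size // 2
    for step in range(steps):
        x = (x + rng.choice((-1, 0, 1))) % size   # wrap at the edges
        y = (y + rng.choice((-1, 0, 1))) % size
        shade = 255 * step // steps               # fade blue -> red over time
        pixels[y][x] = (shade, 0, 255 - shade)
    header = f"P6 {size} {size} 255\n".encode()
    body = bytes(v for row in pixels for px in row for v in px)
    return header + body

with open("walk.ppm", "wb") as f:
    f.write(random_walk_art(seed=42))
```

Drop the seed and every run becomes a one-off, which is exactly the ephemerality described above.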

Let's look at the hardware. How do we do this with little money? We need a CPU, an OS, a screen and a stand.

Thin Client

Let's revive thin-client hardware that you find for a few dollars on eBay, usually devices which spent their previous life in an ATM, a POS terminal or behind a check-in counter at an airport. Once retired after a few years, this kind of equipment gets recycled or finds its way into the electronics second-hand market (and hopefully not into landfills or scrap yards in Africa). Using Linux as the OS, we can use most thin clients built after 2010 with a 64-bit architecture (32-bit is no longer supported by Debian-based systems), with 1 or 2 GB RAM and at least 8 GB disk space. Since we run some graphics here, we need at least decent performance. I found the Fujitsu Futro S920, launched around 2013 with the AMD G-Series GX-415GA 1.5 GHz quad-core CPU, 4 GB DDR3 RAM and an AMD Radeon™ HD 8330E as graphics adapter, which even supports OpenGL 4.1. All for Euro 29,-, including the power adapter. Energy consumption is around 10 Watts. Replace the 2 GB mSATA drive with a 16 or 32 GB one for another Euro 20,-.

One could argue: why not use a Raspberry Pi? With a proper case and power adapter, I would end up at almost Euro 100,-.

Fujitsu FUTRO S920

Linux OS

Debian-based distributions are my choice. The Lubuntu distro keeps both the memory footprint and the disk space requirements small.

Screen and Stand

For the screens I sourced 40″ grade-B returns for roughly Euro 100,- apiece, another way to keep this project sustainable by giving electronic equipment a second life. Now comes the handicraft challenge: building the TV stand. I prefer a portrait setup, and a professional stand easily costs Euro 200,-. Some square iron tubing, basic welding knowledge and some paint do the job. Material cost per stand: about Euro 40,-.

This could even double as a super-low-budget FIDS (Flight Information Display System) setup.

I managed to build the whole setup for less than Euro 200,-. Now it's time to get it ready for public display.

Final setup (on display: the piece 'sandstorm', a transformed version by the author; original by Sayama, CC BY-NC-SA 3.0)

A small desktop version made from scrap metal for a 22″ screen

In the upcoming part 2 I will talk about the software setup of the installation and share some insights about Processing.

Stay tuned..

Bookshelf: AI 2041

Another recommended book for the holiday break. I came across this title listening to the Nvidia podcast (which I also highly recommend). How will artificial intelligence change the world over the next two decades? In ten stories, Kai-Fu Lee explores the future with a blend of science and fiction, making it more accessible to non-tech readers. It is co-authored by Chen Qiufan, who created the fictional parts. The book was released only last September (not yet available in German). Every chapter brings up complex AI topics and hotly debated issues, ranging from deep learning, VR and self-driving cars to quantum computing. The non-fiction review of AI concepts analyses and describes how the technology works. It reminds me of reading Isaac Asimov's books 30 years ago.

If you have read Kai-Fu Lee's earlier books, like 'AI Superpowers: China, Silicon Valley, and the New World Order' or 'My Journey into AI', this one is my recommendation for you.

Get your copy from your favourite book dealer or online. Check out the book website here.

#RetroTech: 80s Home Computers again

I have fond memories of my first steps into computing in the 1980s, when home computers took living rooms and study rooms by storm. For the first time, computing became widely accessible and affordable for everyone. I have only one original device at hand, so we will explore alternative retro options to go down memory lane and also visit some of the other home computing platforms. The retro craze runs through various technology trends: people are starting to value music played from vinyl on a hi-fi turntable and pictures taken with analog photography equipment again, while others collect old computers and video game consoles. The market reacts to this demand, and you can buy the old technology again (usually emulators on modern chipsets packed into the old casings): Sony relaunched the PS1, Nintendo the NES and Atari the 2600 console. Prices for authentic old equipment are rising too (recommended NY Times article). In this post we will have a look at the Commodore C64.

Relaunched Commodore C64 in original case

First things first: you do not need to buy any equipment for a brief visit to the home computing past; everything can be done in the browser or with emulation tools on any regular notebook or Raspberry Pi. The Commodore C64 was my first own computer in 1984; I sold it in 1991 to finance my first IBM-compatible PC. But with all the nostalgic memories attached to it, I bought a retro set from Retro Games Ltd. for roughly Euro 100,- (see image above), just for the sake of its physical look and feel (note: no Commodore logo or trademark is used, as the brand was sold and passed on multiple times over the years). You could achieve the same by installing RetroPie, which can emulate almost any home computer and game console of the 80s and 90s.

The Sinclair ZX81

Before looking at the C64, a quick look at the Sinclair ZX81, which I used temporarily (borrowed from a schoolmate) for about a year to do my first computing explorations. Released in 1981 by Sinclair Research, it was a very basic device with 1KB (!) of memory and a Z80 CPU at 3.25MHz, running Sinclair BASIC and supporting only a 32 × 24 character monochrome display (on a regular TV set). Everything was included in the box, and the only user input was a pressure-sensitive membrane keyboard. An absolute nightmare for any serious typing, let alone development, but it was the only thing at hand.

Image by Evan-Amos – CC BY-SA 3.0

It did support an external 64KB memory add-on and a small cashier-style printer, and the only way to load and store programs was on regular audio tapes at roughly 250bps. If you are keen to give it a spin, drop by this website.
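At roughly 250 bits per second, loading from tape was an exercise in patience. A quick back-of-the-envelope calculation in Python shows why (the bit rate is the approximate figure quoted above; real tape formats add sync and header overhead on top):

```python
def tape_load_seconds(kilobytes: float, bps: int = 250) -> float:
    """Raw transfer time for a program on audio tape:
    kilobytes * 1024 bytes * 8 bits, divided by the tape bit rate."""
    return kilobytes * 1024 * 8 / bps

# The ZX81's full 1KB of memory: about half a minute.
print(round(tape_load_seconds(1)))   # 33 seconds
# A 16KB program (with a RAM pack fitted): almost nine minutes.
print(round(tape_load_seconds(16)))  # 524 seconds
```

And if the volume level was slightly off, you found out only after the whole wait.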

3D Monster Maze by Malcolm Evans in 1981

There was no way to compile applications, so all commercial tools and games effectively shipped as open source: everything was interpreted BASIC, and anyone could list the code.

ZX81 Basic Source

The Commodore C64

The famous blue launch screen and the command to start the first app on the disk (LOAD"*",8,1 followed by RUN)

The Commodore 64 (aka C64, CBM 64) was definitely THE home computer of the 1980s, outselling every comparable device on the market by far.

Several extensions and additional hardware made the device quite universal, even enabling non-gaming activities like word processing.

A Few Software Highlights

Microsoft Multiplan

Believe it or not, the great-grandfather of Excel was released in 1982 by Microsoft itself. Very painstaking to use; arguably the worst possible UX.

Multiplan on the C64
Wikipedia: Multiplan
Data Becker

The once-famous German publisher Data Becker had a series of office applications such as Textomat, Datamat and other xyz-mats.

Source: c-64.online.com

Equally famous were their books on all things C64, from programming to applications of every kind.

Cover of the 3rd revised edition, 1985
Source: c64-wiki.de
GEOS on the Commodore C64

In 1986 (one year after Microsoft introduced Windows 1.0), Berkeley Softworks released GEOS (Graphic Environment Operating System). Don't forget, this is a graphical OS on a 1MHz 6502 processor with 64KB of RAM! I specifically bought a mouse to use it. Fun facts: Nokia used GEOS for its Communicator series before switching to EPOC, and the source code was reverse-engineered and made publicly available on GitHub.

GEOS for the Commodore 64
Wikipedia: GEOS
Sublogic Flight Simulator II

Does anyone remember Flight Simulator 1 by Sublogic, released in 1979? State of the art at the time, given the hardware inside an Apple II, but a terrible flying experience in a wireframe landscape.

Wikipedia: FS1 Flight Simulator

The sequel, Flight Simulator II, came with major improvements: colors and real-world scenery. What a quantum leap; it kept me flying for hours. Don't forget to look through the glasses of someone living in the 80s; compared to the latest MS Flight Simulator it looks like a joke.

Wikipedia: Flight Simulator II (Sublogic)

Other Home Computing Devices from the 80s

Many other home computers tried to conquer homes in the 80s, most of them not even remotely as successful as Commodore.

Amstrad CPC 464, with CTM644 colour monitor
Wikipedia: Amstrad CPC
Sinclair ZX Spectrum 48K
Wikipedia: Sinclair Spectrum
Atari 1040STF
Wikipedia: Atari ST
Apple IIe
Wikipedia: Apple IIe

Conclusion

There is quite some excitement about old technology, mostly for sentimental reasons. It allows us a little time-travel trip into the past. Sadly, it won't keep you entertained for very long; the memories feel better than reliving the experience.