Nvidia's new neural network creates fake celebrities

Nvidia just released a video showing off its use of a generative adversarial network (GAN) to create high-definition photos of fake humans. Trained on a database of celebrity photos, the network generates new faces that are impressively, even eerily, realistic.

Nvidia used a special “progressive” method that begins the GAN’s training on low-resolution images and then works upwards to HD. “This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality,” the company wrote of its approach.
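The progressive idea can be pictured as a resolution schedule: training starts small, and new layers for each higher resolution are gradually blended in. Below is a minimal illustrative sketch of such a schedule; the function name, phase length and fade-in factor are assumptions for illustration, not Nvidia's actual code.

```python
# Hypothetical sketch of a "progressive growing" schedule: training starts at
# a low resolution, and each time the resolution doubles, the new layers are
# faded in via an alpha factor that ramps from 0 to 1.

def progressive_schedule(start_res=4, final_res=1024, steps_per_phase=2):
    """Yield (resolution, alpha) pairs; alpha is the fade-in factor for the
    layers newly added at that resolution."""
    res = start_res
    while res <= final_res:
        for step in range(steps_per_phase):
            alpha = (step + 1) / steps_per_phase  # ramp new layers in gradually
            yield res, alpha
        res *= 2  # double the resolution for the next phase

schedule = list(progressive_schedule(start_res=4, final_res=32, steps_per_phase=2))
# Resolutions grow 4 -> 8 -> 16 -> 32, with alpha ramping inside each phase.
```

In a real implementation each phase would run many thousands of training steps, and the alpha blend would mix the output of the new high-resolution layers with an upscaled version of the previous ones.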

This is another great example of how powerful neural networks and AI really are, and of how they can be used for creative applications.

More information: research.nvidia.com

Google’s new wireless earphones with live translation will change the world

Google’s new wireless earphones, called Pixel Buds, are designed for use with the company’s new Pixel 2 handset and have an extraordinary special feature: if you say “Help me speak Japanese” and then start speaking in English, the phone’s speakers will output your translated words as you speak them. The other party’s reply is then translated into your language and played into your ear through the Pixel Buds. On top of this, you get all the standard features: issuing commands to Google Assistant on the Pixel 2, having it play music, give you directions, place a phone call and so on.

Fitting all of the bits and pieces necessary for real-time language translation into a device small enough to fit into your ear and pocket is no easy feat, and truly amazing. The future we have all been waiting for is finally starting to arrive.
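Conceptually, the live-translation loop chains three stages: speech recognition, machine translation and speech synthesis. The sketch below shows that flow with stub functions; none of these names correspond to Google's actual APIs, they are stand-ins to illustrate the pipeline.

```python
# Hypothetical sketch of the live-translation loop: ASR -> MT -> TTS.
# All functions are illustrative stubs, not real Google services.

def recognize(audio, language):
    """Stub speech-to-text; a real system streams audio to an ASR model."""
    return audio["transcript"]

def translate(text, source, target):
    """Stub machine translation; a real system calls a translation model."""
    return f"[{source}->{target}] {text}"

def synthesize(text):
    """Stub text-to-speech; a real system returns playable audio."""
    return {"speech": text}

def live_translate(audio, source="en", target="ja"):
    text = recognize(audio, source)
    translated = translate(text, source, target)
    return synthesize(translated)

out = live_translate({"transcript": "Where is the station?"})
```

The engineering challenge the article points at is running this whole chain with low enough latency that the translation plays "as you speak".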

Classyfier – Using AI to match the right music to the right situation

The Classyfier is a table, developed at the Copenhagen Institute of Interaction Design, that detects the beverages people consume around it and chooses music that fits the situation. A built-in microphone picks up characteristic sounds and compares them to a catalogue of pre-trained examples.
The Classyfier then assigns the sound to one of three classes: hot beverages, wine or beer. Each class has its own playlist, which one can navigate by knocking on the table.
The idea behind this project was to build a smart object that uses machine learning and naturally occurring sounds as input to enhance the ambiance of different situations.
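The matching step described above can be sketched as a nearest-neighbour comparison between a sound's feature vector and labelled examples. The feature values and example sounds below are entirely made up for illustration; a real system would extract audio features (e.g. spectral statistics) from the microphone signal.

```python
# Simplified sketch of classifying a sound by comparing its feature vector
# to a catalogue of pre-trained, labelled examples (nearest neighbour).
# Features and example values are invented for illustration.
import math

EXAMPLES = {
    "hot beverages": [(0.2, 0.9), (0.3, 0.8)],  # e.g. stirring spoon, pouring
    "wine":          [(0.6, 0.4), (0.7, 0.5)],  # e.g. cork pop, glass clink
    "beer":          [(0.9, 0.1), (0.8, 0.2)],  # e.g. can crack, bottle cap
}

def classify(features):
    """Return the class whose nearest example is closest to `features`."""
    best_class, best_dist = None, float("inf")
    for label, examples in EXAMPLES.items():
        for example in examples:
            dist = math.dist(features, example)
            if dist < best_dist:
                best_class, best_dist = label, dist
    return best_class

print(classify((0.85, 0.15)))  # -> beer
```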

Deep neural networks and machine learning are key players in artificial intelligence. They simulate basic information processing in the brain and are used in more and more products.

Cymatics – The science of visualising audio frequencies

Inspired by synesthesia and cymatics, Nigel Stanford set out to make a music video in which every note has a corresponding visual produced by the music being played.
He teamed up with a video director, and the result is an amazing blend of physics, technology and music, all wrapped up in a music video.
It shows that combining different fields and thinking outside the box can create a very innovative project, which is what creative development is all about.

If you are interested in more, you can read about and watch videos of the making of the project on his website.

Ferrolic: A beautiful living clock

Ferrolic is a clock made out of ferrofluid that is moved by magnets to visualise the changing of time. The dynamic movement of the ferrofluid gives the passage of time a tangible, mesmerizing quality.
The only downside is that the clock has a short lifespan of a few months, since the ferrofluid degrades after a while and stops responding to the magnetic fields.

Still, this is a great example of creatively combining physics, technology and a sense of design to produce an item that looks like a piece of art while retaining its original usefulness.

More information: Ferrolic.com

Google – artificial intelligence generated photography

Google released a paper about using a deep-learning system for artistic content creation. First, the algorithm chooses the best crop out of Google Street View panoramas. Then the program applies several machine-learning-based saturation and HDR filters, as well as masking.
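The two-stage pipeline described above can be sketched as: score candidate crops, keep the best one, then run it through a chain of post-processing filters. The scoring function and filter below are trivial stand-ins; in the paper both stages use learned models.

```python
# Hypothetical sketch of the crop-then-filter pipeline: pick the
# highest-scoring crop of a panorama, then apply learned filters in sequence.
# Images are tiny lists of pixel values and the "models" are stubs.

def score_crop(crop):
    """Stub aesthetic score; the real system uses a learned model."""
    return sum(crop) / len(crop)

def best_crop(candidate_crops):
    return max(candidate_crops, key=score_crop)

def apply_filters(image, filters):
    for f in filters:
        image = f(image)
    return image

crops = [[0.1, 0.2], [0.6, 0.7], [0.3, 0.4]]
chosen = best_crop(crops)  # the crop with the highest score
saturate = lambda img: [min(1.0, p * 1.2) for p in img]  # stand-in "filter"
result = apply_filters(chosen, [saturate])
```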

More examples / gallery: google.github.io/creatism/
More information: research.googleblog.com/
Images: Google


First look at Apple's Augmented Reality Kit for iOS

iOS 11 will include a new augmented reality toolkit, “ARKit”, which brings native AR support to Apple's mobile systems. The public beta of iOS 11 has been available for a few days now, and the feature is being tested widely by the developer community, with some very nice results. The tracking seems to be extremely stable and precise. Together with advanced real-time rendering engines, the immersion is fascinating. Just have a look at the videos to get a first impression.

Augmented reality devices like the Microsoft HoloLens or the Magic Leap, and systems like Apple's ARKit or Google's Tango, extend reality with an additional synthetic visual layer and open up completely new possibilities for guidance, information and entertainment systems.

More information: apple.com

This new device teleports lemonade

Virtual lemonade sends colour and taste to a glass of water.
A system of sensors and electrodes can digitally transmit the basic colour and sourness of a glass of lemonade to a tumbler of water, making it look and taste like a different drink. The idea is to let people share sensory experiences over the internet.

This is a very nice approach to reproducing things over long distances by sending only the information that describes an object. It could also be combined with many other replication technologies (e.g. 3D printing).

More information: newscientist.com

Google Jamboard: A 55” 4k digital whiteboard

Google recently launched its 55-inch 4K digital whiteboard, called Jamboard. It essentially tries to improve and enhance the whiteboard experience by leveraging the current power of 4K displays and inter-connectivity.

It boasts some impressive features, like handwriting and shape recognition, 16 touch points, and seamless integration with Jamboard apps on phones and tablets. If you are interested in the full specs, you can find them here.

The only arguable downside is its price: $5,000. On top of that, you will also be required to pay an annual management and support fee of $600. If price is no issue, then you might just be signing up for the next evolution of the collaborative experience.

Google unveils Daydream 2.0 featuring Chrome VR, YouTube VR and more

One of the major updates slated for later this year is Daydream 2.0 (codenamed Euphrates), announced by Google during a VR- and AR-focused keynote on day 2 of I/O 2017. A standalone VR headset is also being developed together with Qualcomm; it will feature ‘WorldSense’ tracking tech as well as the latest Snapdragon 835 processor. It will also include two wide-angle cameras along with motion sensors to detect movement, and will most likely ship with a Daydream controller.

Users will be able to use Chrome in Daydream to browse the web while in virtual reality, access WebVR content with full Chrome Sync capabilities, and screenshot, capture or cast any virtual experience onto a Chromecast-equipped TV. Separately, Google is also bringing augmented reality features to Chrome for Tango-supported phones. Development will also become much easier with Instant Preview, which allows developers to make changes on a computer and see them reflected on a VR headset in seconds.

The new system will be available on all current Daydream devices later this year, including the Galaxy S8 and S8+ and LG’s upcoming flagship device.