Cymatics – The science of visualising audio frequencies

Inspired by synesthesia and cymatics, Nigel Stanford set out to make a music video in which every note has a corresponding visual produced by the music itself.
He teamed up with a video director, and the result is an amazing blend of physics, technology and music, all wrapped up in a music video.
It shows that combining different fields and thinking outside the box can create a very innovative project, which is what creative development is all about.

If you are interested in more, you can read about the project and watch making-of videos on his website.

Google – artificial-intelligence-generated photography

Google released a paper about using a deep-learning system for artistic content creation. First, the algorithm chooses the best crop out of Google Street View panoramas. Afterwards, the program applies several machine-learning-based saturation and HDR filters as well as masking.

Deep neural networks and machine learning are key components of artificial intelligence. They simulate the brain's basic information processing and are used in more and more products.


More examples / gallery: google.github.io/creatism/
More information: research.googleblog.com/
Images: Google

 

First look at Apple's augmented reality kit for iOS

iOS 11 will introduce “ARKit”, a new augmented reality toolkit that brings native AR support to Apple's mobile systems. The public beta of iOS 11 has been available for a few days, and the developer community is already testing this feature widely, with some very nice results. The tracking seems to be extremely stable and precise. Together with advanced real-time rendering engines, the immersion is fascinating. Just have a look at the videos to get a first impression.

Augmented reality devices like the Microsoft HoloLens or the Magic Leap, and systems like Apple's ARKit or Google's Tango, extend reality with an additional synthetic visual layer and open up completely new possibilities for guidance, information and entertainment systems.

More information: apple.com

This new device teleports lemonade

Virtual lemonade sends colour and taste to a glass of water.
A system of sensors and electrodes can digitally transmit the basic colour and sourness of a glass of lemonade to a tumbler of water, making it look and taste like a different drink. The idea is to let people share sensory experiences over the internet.

This shows a very nice approach to reproducing things over long distances by sending only the information that describes an object. It could also be adapted to many other replication technologies (e.g. 3D printing).

More information: newscientist.com

Google unveils Daydream 2.0 featuring Chrome VR, YouTube VR and more

One of the major updates slated for later this year is Daydream 2.0 (codename Euphrates), announced by Google during a keynote focused on VR and AR on day 2 of I/O 2017. Google also previewed a standalone VR headset, developed together with Qualcomm, which will feature ‘WorldSense’ tracking tech as well as the latest Snapdragon 835 processor. It will also include two wide-angle cameras and motion sensors to detect movement, and will most likely ship with a Daydream controller.

Users will be able to use Chrome in Daydream to browse the web while in virtual reality, access WebVR content with full Chrome Sync capabilities, and screenshot, capture or cast any virtual experience onto a Chromecast-equipped TV. Separately, Google is also bringing augmented reality features to Chrome on Tango-supported phones. Development will also become much easier with Instant Preview, which lets developers make changes on a computer and see them reflected on a VR headset within seconds.

The new system will be available on all current Daydream devices later this year, including the Galaxy S8 and S8+ and LG’s upcoming flagship device.

WebVR: Mozilla’s VR for the browser

Mozilla, maker of the popular Firefox browser, now offers immersive room-scale VR through a web browser, without downloads or installs: introducing WebVR and A-Frame. Apps built on the platform can run on cheap smartphone headsets as well as on more powerful sets such as the Oculus Rift and HTC Vive. Both the JavaScript API (WebVR) and the HTML framework (A-Frame) are open source and require no linear algebra or languages like C++ to develop with.
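To give a sense of how approachable this is, a complete 3D scene that runs on desktop, mobile and VR headsets can be declared in a few lines of HTML with A-Frame. A minimal sketch based on A-Frame's documented primitives; the 0.6.0 release URL reflects the current version at the time of writing:

```html
<!-- Minimal A-Frame scene: a box, a sphere, a ground plane and a sky -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.6.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening this file in a WebVR-capable browser renders the scene immediately and, on supported headsets, offers a button to enter stereoscopic VR.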

“Sharing is as fast and simple as sharing a web page, and it’s open to anybody,” said Sean White, senior vice president of emerging technologies at Mozilla. The new platform is expected to grow considerably over the next five years, providing new VR experiences in the fields of education, creative expression and product development.

 

Adidas and Carbon Launch First Tailored 3D-Printed Sneakers

Adidas has teamed up with 3D printing startup Carbon to mass-produce its latest sneaker, the Futurecraft 4D. While 3D printers are generally not designed for manufacturing at scale and lack the production-grade elastomers needed for demanding athletic footwear, Carbon’s rapid product development process enabled Adidas to iterate over 50 different lattices for the midsole before landing on the current design.

This partnership exemplifies how new technologies and materials are paving the way for custom, high-performance products that meet the unique needs of each customer.

Nadja – a chatbot with emotional intelligence

Nadja was developed for the Australian government to improve its National Disability Insurance Scheme, a service for people with disabilities. The bot helps users find information and makes it accessible in a more human way. It can read emotions through a webcam and reacts to them in a subtle way. Like AI, EI (emotional intelligence) keeps improving by learning from use.
The technology behind it comes from the company Soul Machines, and the voice is Cate Blanchett’s.

More information: www.soulmachines.com
via: thenextweb.com

Real time facial projection mapping

This collaborative work by YABAMBI, the Ishikawa Watanabe Lab at The University of Tokyo, TOKYO and Nobumichi Asai (WOW) shows a performance in which a 1,000 fps projection system combined with a super-high-speed sensor produces an outstanding sense of immersion.

This demo is a nice example of what can be achieved with the latest sensor and projection technology. It could also be adapted to many other applications.