Facebook 3D Posts

A few weeks ago Facebook released its newest content format: the 3D Post. After 360° panoramas, this is the next step for 3D content. It is now possible to bring real 3D models with geometry and textures into Facebook feeds, so you can move around an object and inspect it from every angle. If the model is well prepared, it works very nicely with good performance and quality, even on mobile devices. Currently only static 3D models are supported, but the format is expected to evolve in the near future: animations, more interactivity, and Augmented and Virtual Reality applications within the Facebook context are now within reach. We have tried out the production pipeline and, together with our social media team, are ready to go for your next Facebook 3D Post. If you are interested in this new format, just get in touch with us!


More information: developers.facebook.com

Amazon – Augmented Reality Shopping

Amazon announced that AR (Augmented Reality) View is now also available for Android devices (US only). The service allows people to experience products virtually in their own home. Amazon launched the service on Apple devices last November and found that customers are more likely to purchase a product after using AR View.

Augmented reality devices like the Microsoft HoloLens or the Magic Leap, and systems like Apple ARKit or Google Tango, extend reality with an additional synthetic visual layer and open up completely new possibilities for guidance, information and entertainment systems.

Artificial Intelligence vs. lawyers – K.O. first round

In a recent study, 20 experienced US-trained lawyers competed against an AI (Artificial Intelligence) algorithm in spotting risks in business contracts. For the first time, the algorithm beat the lawyers, with an accuracy of 94% compared to the 85% achieved by its human colleagues. The AI needed only 26 seconds for the task, compared to an average of 92 minutes for the lawyers.

This impressively shows the progress and application of AI, and it is just another milestone where AI outperforms humans in a specific task.

More information: lawgeex.com

Vuzix Blade™ – Augmented Reality glasses with Alexa assistant

The Vuzix Blade is what Google Glass always wanted to be. Of course, the Blade isn’t just the second coming of Google Glass: Vuzix has learned important lessons about what it takes to make the concept of smart glasses easier to accept. They are easy to use, and from a distance of 3 m they look like a normal pair of glasses.

Vuzix is trying to improve the product by eventually integrating Amazon Alexa into its smart glasses. Imagine walking down a street getting real-time information about upcoming events near you and, with Alexa by your side, having tickets ready as soon as you arrive…

For more information visit time.com, or for the technical details vuzix.com. #vuzixblade #smartglasses #augmentedreality #alexa #ces2018

Convolutional Neural Network Based Level Editor for Virtual Reality

Sam Snider-Held, a creative technologist working at MediaMonks, created a Convolutional Neural Network that can be used as a VR level-editor interface.
The whole project has a Harry Potter feel to it: you draw objects in VR, and the neural network detects what you drew and loads the corresponding primitive or asset, so you can quickly build scenes in 3D.
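The core mechanic (classify a rasterized drawing, then spawn the matching asset) can be sketched without a trained network. The sketch below substitutes a simple template-overlap classifier for the CNN; the grid size, labels and asset names are illustrative assumptions, not details from the project.

```python
import numpy as np

GRID = 16  # coarse raster resolution, a stand-in for the CNN's input size

def rasterize(points, grid=GRID):
    """Turn a list of (x, y) points in the unit square into a binary bitmap."""
    img = np.zeros((grid, grid))
    for x, y in points:
        img[min(int(y * grid), grid - 1), min(int(x * grid), grid - 1)] = 1.0
    return img

# Hypothetical label -> asset mapping, like loading an engine primitive.
ASSETS = {"line": "Cylinder", "circle": "Sphere"}

def make_templates(grid=GRID):
    """Reference bitmaps standing in for the CNN's learned classes."""
    t = np.linspace(0, 1, 200)
    line = [(v, v) for v in t]  # a diagonal stroke
    circle = [(0.5 + 0.4 * np.cos(a), 0.5 + 0.4 * np.sin(a))
              for a in np.linspace(0, 2 * np.pi, 200)]
    return {"line": rasterize(line), "circle": rasterize(circle)}

TEMPLATES = make_templates()

def classify_stroke(points):
    """Return the asset whose template best overlaps the drawn stroke."""
    img = rasterize(points)
    scores = {label: (img * tpl).sum() / max(tpl.sum(), 1)
              for label, tpl in TEMPLATES.items()}
    return ASSETS[max(scores, key=scores.get)]
```

In the real project a CNN replaces the template matching, which lets it recognize messy freehand strokes rather than near-perfect shapes.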

This is a great example of the intersection of AI, creativity and 3D content generation, and the experience of drawing objects out of thin air is quite magical.

Flying a holographic drone in your living room

Epson announced the launch of an augmented reality flight simulator that lets you pilot a virtual drone through the physical space around you using the company’s Moverio AR glasses.
There are even a few games built into the app: you can collect candy or fly through rings. Figuring out how the holographic drone relates to these objects in space definitely highlights some of the limitations of the simulation.
It shows that combining two wildly different fields can give impressive results, and it gives you the pleasure of flying a drone without breaking the bank.


Nvidia’s new neural network creates fake celebrities

Nvidia just released a video showing off its use of a generative adversarial network (GAN) to create high-definition photos of fake humans. The results are impressive and extremely realistic: the network uses a database of celebrity photos to generate entirely new faces.

Nvidia used a special “progressive” method that began the GAN’s training on low-res images and then worked upwards to HD. “This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality,” wrote the company of its approach.
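The key trick of that progressive method, growing the output resolution in stages and smoothly fading each new stage in, can be sketched in a few lines. This is a simplified illustration of the resolution schedule and the fade-in blend only, not Nvidia's actual implementation; the function names and the blending form are assumptions based on the general progressive-growing idea.

```python
import numpy as np

# A typical schedule: double the resolution at each stage, 4x4 up to 1024x1024.
resolutions = [4 * 2 ** i for i in range(9)]

def upsample2x(img):
    """Nearest-neighbour 2x upsampling, used while a new resolution fades in."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(img):
    """2x average-pool, used to make low-res targets from the HD training set."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def faded_output(low_res_out, high_res_out, alpha):
    """Progressive-growing fade: blend the upsampled low-res branch with the
    freshly added high-res branch. alpha ramps 0 -> 1 during the transition,
    so the new layers are introduced gradually instead of all at once."""
    return alpha * high_res_out + (1 - alpha) * upsample2x(low_res_out)
```

Starting small and fading in detail is what stabilizes training: the network never has to learn coarse structure and fine texture at the same time.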

This is another great example of how powerful neural networks and AI really are, and of how they can be used for creative applications.

More information: research.nvidia.com

Google’s new wireless earphones with live translation will change the world

Google’s new wireless earphones, called Pixel Buds and designed for use with the company’s new Pixel 2 handset, have an extraordinary special feature: if you say “Help me speak Japanese” and then start speaking in English, the phone’s speaker will output your translated words as you speak them. The other party’s reply will then play into your ear through the Pixel Buds, translated into your language. Next to this cool feature you have all the standard functions: issuing commands to Google Assistant on the Pixel 2, having it play music, give you directions, place a phone call and whatnot.
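Conceptually this is a pipeline: speech recognition, machine translation, then speech output on the other side. The toy sketch below shows only the shape of that conversation loop; the phrasebook lookup is a hypothetical stand-in for Google's neural translation system, and every name in it is invented for illustration.

```python
# Hypothetical phrasebook standing in for a real translation model.
PHRASEBOOK = {
    ("en", "ja"): {"hello": "konnichiwa"},
    ("ja", "en"): {"konnichiwa": "hello"},
}

def translate(text, src, dst):
    """Word-by-word lookup as a stand-in for neural machine translation."""
    table = PHRASEBOOK[(src, dst)]
    return " ".join(table.get(word, word) for word in text.lower().split())

def conversation_turn(utterance, speaker_lang, listener_lang):
    """One turn of the loop: what gets played to the listener. In the real
    product, speech recognition precedes this and speech synthesis follows."""
    return translate(utterance, speaker_lang, listener_lang)
```

The hard part Google solved is doing all three stages fast enough to feel like conversation, on hardware that fits in an ear.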

Fitting all of the bits and pieces necessary to facilitate real-time language translation into a device small enough to fit into your ear and pocket is no easy feat and is truly amazing. The future we have all been waiting for is finally starting to arrive.

Classyfier – Using AI to match the right music to the right situation

The Classyfier is a table developed at the Copenhagen Institute of Interaction Design that detects the beverages people consume around it and chooses music that fits the situation. A built-in microphone captures characteristic sounds and compares them to a catalogue of pre-trained examples.
The Classyfier then assigns the sound to one of three classes: hot beverages, wine or beer. Each class has its own playlist, which one can navigate by knocking on the table.
The idea behind this project was to build a smart object that uses machine learning and naturally occurring sounds as input to enhance the ambiance of different situations.
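The described pipeline, capture a sound, extract features, compare against pre-trained examples, pick the closest class, can be sketched as a nearest-prototype classifier. The two features, the synthetic "prototype" sounds and their class assignments below are illustrative assumptions, not the project's actual model.

```python
import numpy as np

def features(signal):
    """Two crude features of a sound clip: energy and zero-crossing rate."""
    energy = float(np.mean(signal ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return np.array([energy, zcr])

def sine(freq, n=1000):
    """Synthetic test tone standing in for a recorded sound clip."""
    t = np.linspace(0, 1, n, endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# Stand-in "pre-trained" catalogue: one prototype sound per class.
# Higher-frequency tones play the role of hissier sounds (e.g. a kettle).
CATALOGUE = {
    "hot beverages": features(sine(30)),
    "wine": features(sine(10)),
    "beer": features(sine(2)),
}

def classify(signal, catalogue):
    """Assign the clip to the class whose prototype is nearest in feature space."""
    f = features(signal)
    dists = {label: np.linalg.norm(f - proto) for label, proto in catalogue.items()}
    return min(dists, key=dists.get)
```

The real Classyfier would use richer audio features and a trained model, but the structure, features in, nearest known class out, is the same.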

Deep neural networks and machine learning are key players in artificial intelligence. They simulate the basic information processing of the brain and are used in more and more products.

Cymatics – The science of visualising audio frequencies

Inspired by synesthesia and cymatics, Nigel Stanford set out to make a music video in which every note has a corresponding visual produced by the music being played.
He teamed up with a video director, and the result is an amazing blend of physics, technology and music, all wrapped up in a music video.
It shows that combining different fields and thinking outside the box can create a very innovative project, which is what creative development is all about.

If you are interested in more, you can read about the project and watch making-of videos on his website.