
Issue #2: Dimensional Dispatch


Nikki here 👋🏼 back with the second edition of Dimensional Dispatch!

We’re in the lead-up to this year’s SIGGRAPH (one of the biggest 3D graphics conferences, and where Pixar made its debut a few decades ago), so naturally we’ve been seeing a wave of research papers released over the last week, with more on the way. We’re expecting huge leaps in research around 3D capture, especially around Neural Radiance Fields (NeRFs) combined with advances in machine learning (ML), as you’ll see in the sections below. While this issue only scratches the surface of those findings, we’re sure that Issue 3(D, lol) will be even more chock-full of SIGGRAPH specials.

I want to start by saying thank you to everyone who reached out about our first issue. We weren’t sure how y’all would take to the new format, but it seems like we’re off to a good start. The last issue wrapped up with a shoutout to community member Jake Adams (aka Valholo), who had just launched his course Art Through the Looking Glass. We were happy to see his work covered by 80.lv just yesterday, on the heels of the launch of his Looking Glass app Aphid: Through the Looking Glass. This has been a labor of love for Jake, who has worked tirelessly over the last two years to put this holographic comic book together. I can safely say it is by far one of my favorite applications on a Looking Glass, and I promise your jaw will drop if you experience it, too.

If you want to support Jake, here are some simple ways to do that: 

  • Read and share his feature on 80.lv. In this piece, Jake gives an extensive walkthrough of the project, from how he conceived of the idea all the way through to how he executed on it.
  • Download and experience Aphid: Through the Looking Glass for your Looking Glass on itch.io.
  • Sign up for his course Art Through the Looking Glass here, and learn how to make your own holocomic.
  • Watch (and like and subscribe!) the Creator Spotlight we did with Jake on our YouTube channel here.

Read on to find out what our team has been reading, sharing, and talking about. Let us know if there’s anything we might have missed or that you think is worth sharing!

-Nikki @ Looking Glass

This week in Generative AI

Unless you’re living under a rock (and if you are, good for you, maybe stay there for a bit?), our friends at Runway released their improved Gen-2 image-to-video feature. While the ability to generate video from an image has been out for a couple of months now, this release brings marked improvements in the quality and coherence of image-to-video outputs. In other words, you can now get better results from Gen-2 using just an image, no additional text or prompting required!

This is some early generative work by community artist Purz (@PurzBeats), made back in October 2022 using Stable Diffusion and DaVinci Resolve’s depth map tool. We’ve come a long way since, but it’s exciting to see what some of those earlier experiments looked like.

Meanwhile, since Runway launched the new version, we’ve seen some incredible experiments with it, including this one by @bryanf0x. Brb while I rummage through some old photos of my grandparents.

While experimenting, I realized I’d completely forgotten that Runway has a direct ‘export as RGB-D video’ function.... more experiments coming that way soon. (P.S. In case you forgot, Looking Glass Studio can import RGB-D videos for your Looking Glass, so....)

An Unreal Week

  • Unreal Engine announced the 5.3 preview, which brings significant improvements to Lumen and Nanite for VR rendering. We’re excited to see the continued performance and fidelity improvements here. More here.
  • Luma AI’s Unreal Engine plugin v0.3 is out now. The update lets you extract a specific region of your NeRF with updated crop controls, and adds new quality controls so you can scale render quality to match your use case (from fast rendering all the way up to cinematic). You can try it today here.
  • The Looking Glass Unreal Plugin now supports Unreal Engine 5.2 and has improved support for Lumen and Nanite. Download the update here.

We are, of course, paying close attention to all the Unreal updates coming out of SIGGRAPH next week. A full schedule of their sessions can be found here.

Papers, papers, papers! 

With SIGGRAPH drawing close, NeRFs, as expected, take the spotlight in the papers we’ve been collecting and sharing internally. Here are some of the ones that stood out to us over the last couple of weeks:

  • Just a few years after Berkeley engineers first debuted NeRFs, another team at Berkeley has created Nerfstudio, an open-source Python framework that provides plug-and-play components to help speed up NeRF creation and collaboration on NeRF projects. The team will be presenting their paper on Nerfstudio at SIGGRAPH. (🔗)
  • Introducing Seal-3D: a method that makes NeRFs editable much the way you’d edit pixels in Photoshop. Learn how this tool combines innovative strategies to offer pixel-level 3D editing and real-time previews, promising a significant leap in the field of 3D content creation and processing. (🔗)
  • While NeRFs are picking up steam, there has always been a trade-off between rendering quality and efficiency. This paper claims that the Tri-MipRF method achieves state-of-the-art rendering quality with an efficiency not seen in other methods. (🔗)
  • Researchers found a neat solution for 3D printing anisotropic patterns, essentially encoding a bit of the 3rd dimension in 2D. (🔗)

What we’re reading

  • Adobe’s Chief Strategy Officer, Scott Belsky, spoke recently about the transformative impact of generative AI on creativity and work, and how tools like DALL-E have effectively “lowered the floor of the creativity box.” It reminds me of a quote from Seymour Papert that I often come back to, “low floors, high ceilings,” most often used to describe tools that are easy to learn but powerful once you master them. In his talk, Belsky also stresses that the future of AI will bring much more personalized experiences for users (i.e. customers) and that companies should start venturing into these channels with an empathy-first approach. I was immediately intrigued by this piece because, since late last year, Adobe has faced some serious roadblocks in its exploration of generative AI, given its existing user base of artists and designers who may have controversial opinions about this space. Read the full talk by Belsky here.
  • A little more than a week ago, it looked like Apple might finally be joining the AI chatbot bandwagon, and just a week after that, it was reported that Amazon announced the expansion of Bedrock with conversational agents. Just this morning, as we were getting ready to hit publish, it was reported that Meta is now potentially entering the arena with AI-powered chatbots, seeing them as a way to boost engagement on its social platforms. While the Meta and Apple reports are still purely speculative, there is no shortage of news coming out of the MAMAA companies about integrating personalized chatbots into their products. Let’s see where this adventure takes us.

Celebrating our friends

  • Volograms recently announced becoming an 8th Wall platform partner to bring 3D volumetric holographic messaging and videos directly to a WebAR experience near you. We’ve always been huge fans of the Volograms team and their work on making 3D more accessible. Go check out their work here.
  • We’ve been talking about NeRFs quite a bit, but it’s for good reason, we promise! Check out this insane capture by Nick St. Pierre using Luma with his iPhone. (As you can probably tell, we’re particularly obsessed with how NeRFs can capture light and reflections in a way other 3D capture methods aren’t quite able to.)

Nice to meet you 👋🏼

  • SIGGRAPH (Aug 6 - 10): NVIDIA Research has created a new 3D imaging system that uses generative AI to convert a conventional webcam stream into a light field video stream that works with Looking Glass holographic displays. The system will be presented at SIGGRAPH 2023 as a 3D telepresence demo. Headed to SIGGRAPH? More info here.
Credit: (X/@luminohope)
  • Greenpoint Film Festival (Aug 2 - 6): We will be premiering Valholo’s Aphid: Through the Looking Glass at Greenpoint Film Festival this week alongside a Looking Glass 65”, potentially with a new Liteform friend. We highly recommend stopping by if you happen to be in New York this week. Buy tickets here.

From the source (us)

  • We released Looking Glass WebXR 0.4.0, which improves automatic windowing in Chrome on macOS and Windows and now allows for quilt screenshots. (There’s a minimal usage sketch just after this list.)
  • Last Thursday, our friendly Community Manager, Arturo, joined the team at AWE Nite and, in the span of one hour, speedran through some of our latest and greatest hits: Blocks and Liteforms. Missed it? You can watch the full recap of the show below.
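
If you haven’t tried Looking Glass WebXR yet, here’s a rough idea of how little code it takes to get a three.js scene onto a Looking Glass. This is a minimal sketch based on how we remember the @lookingglass/webxr package working in earlier releases; the LookingGlassWebXRPolyfill and LookingGlassConfig names and config fields below are assumptions to double-check against the 0.4.0 docs.

```ts
// Minimal sketch (not the official sample): wiring Looking Glass WebXR into a
// plain three.js scene. The @lookingglass/webxr import names below
// (LookingGlassWebXRPolyfill, LookingGlassConfig) are the ones we recall from
// earlier releases and may differ slightly in 0.4.0.
import * as THREE from "three";
import { VRButton } from "three/examples/jsm/webxr/VRButton.js";
import { LookingGlassWebXRPolyfill, LookingGlassConfig } from "@lookingglass/webxr";

// Describe the volume of the scene the display should capture, then install
// the polyfill so the Looking Glass is advertised as a WebXR device.
const config = LookingGlassConfig;
config.targetY = 0;                 // vertical center of the captured volume
config.targetZ = 0;                 // depth center of the captured volume
config.targetDiam = 3;              // diameter of the captured volume
config.fovy = (14 * Math.PI) / 180; // vertical field of view, in radians
new LookingGlassWebXRPolyfill();

// Standard WebXR three.js boilerplate from here on.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const cube = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial());
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera); // inside an XR session, this renders the views for the quilt
});
```

The nice part of this setup is that the polyfill simply makes the Looking Glass show up as a WebXR device, so the familiar “Enter VR” flow takes care of rendering for the display without any custom quilt code on your side.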

That’s a wrap for this issue. We’re all looking forward to the many new advancements to be announced at SIGGRAPH, and we’ll be sure to do a thorough overview of everything that piqued our interest in the next one.

Loved this? Let the world know by sharing it with your network using the share buttons to the left, or simply sign up for our newsletter if you haven’t already to get the next issue in your inbox when we publish.

As always, to the future!

-Nikki & the Looking Glass team
