3 Tech Innovations That Will Change the Way Consumers Shop for Home and Beyond
In early 2023, there’s no shortage of speculation about how the next wave of computing innovation might capture our attention and imagination. From blockchain and crypto to NFTs, quantum computing, advances in artificial intelligence (AI), augmented and virtual reality, and the metaverse, a number of technologies promise to herald a new era that could change our lives as profoundly as the mobile phone did years ago. Many of these terms will never cross the barrier that separates jargon from reality, while others will go on to shape how we live our day-to-day lives.
Below are three technologies poised to transform and evolve how we shop for the home.
Generative AI
Recent advances in AI allow people to create art from strings of text (e.g., a daikon radish in a tutu walking a dog, or an armchair in the shape of an avocado). Be it DALL-E from OpenAI, Make-A-Scene from Meta, or SR3 and CDM from Google, advances in diffusion models, which power much of today's generative AI, allow people to produce image and video artwork from text prompts, create digital avatars from photos, and generate high-resolution images from low-resolution inputs.
Diffusion models are a great example of how the scientific community is constantly experimenting with innovative approaches to further the state of the art in AI. During training, a diffusion model repeatedly adds noise to corrupt an image, then trains a neural network to reverse that corruption step by step — so at generation time, the network can gradually turn pure noise into a coherent image.
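The noising half of that process has a simple closed form. The sketch below illustrates it with NumPy, using the linear noise schedule and `alpha_bar` naming common in the diffusion-model literature; it is a minimal illustration, not code from any of the systems named above.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule; alpha_bar[t] is the fraction of signal left at step t."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def add_noise(x0, t, alpha_bar, rng):
    """Corrupt a clean image x0 to timestep t in one step; eps is what the network learns to predict."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
alpha_bar = make_schedule()
x0 = rng.standard_normal((8, 8))   # stand-in for an image
xt, eps = add_noise(x0, t=999, alpha_bar=alpha_bar, rng=rng)
# By the final timestep, alpha_bar is tiny: xt is almost pure noise,
# which is exactly the corruption the trained network learns to undo.
```

Training then reduces to a regression problem: predict `eps` from `xt` and `t`, which is what makes the approach scale so well.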
Real-world applications of generative algorithms offer many possibilities in the home; customers can reimagine what their space would look like by simply uploading an image of a room or area (e.g., a living room) along with inputs on what style they prefer. For example, they might want to transform their living room into a more modern space with a Scandinavian aesthetic or a French country feel. With generative AI, individuals have the power to dream up an entirely new space quickly and easily, without changing any of the core architecture. While these spaces are completely digital, they can serve as inspiration to help customers figure out what they’re looking for and eventually turn this inspiration into a shoppable reality. Imagine being able to create and view multiple versions of a new kitchen in seconds. And once a kitchen layout is selected, being able to purchase items in that kitchen with just one click.
Democratization of 3D Content Creation
When looking at web pages online today, users typically browse information in a two-dimensional form factor, be it a laptop, phone, tablet or television set. However, advances in 3D image generation will soon allow that same user to experience visual information — e.g., products, people, places, videos and images — in a much richer way.
One of the historical challenges hindering 3D adoption online has been creating and sharing 3D images at scale. Producing them has required niche skill sets, specialized tools and software, so the average person, lacking access to these resources, wouldn't even know where to start.
However, recent advances in technologies like neural radiance fields (NeRF) are poised to change that. A NeRF is a fully connected neural network that can generate complex 3D scenes from a set of two-dimensional images. At a very high level, NeRFs let a person upload a handful of photos of an item taken on their phone, which the algorithm then uses to generate a realistic 3D representation. It’s one thing to see a photograph of a bouquet — it's far more engaging to see those flowers flutter and come alive in a breeze in 3D.
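One detail behind that "fully connected network": NeRF does not feed raw 3D coordinates into the network directly. It first lifts each point into sines and cosines at increasing frequencies (a positional encoding) so the network can represent fine detail. A minimal sketch of that step, assuming the 10-frequency setup commonly used:

```python
import numpy as np

def positional_encoding(p, num_freqs=10):
    """Map a 3D point p to [sin(2^k * pi * p), cos(2^k * pi * p)] for k < num_freqs."""
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * p))
        feats.append(np.cos((2.0 ** k) * np.pi * p))
    return np.concatenate(feats)  # shape: (3 * 2 * num_freqs,)

point = np.array([0.1, -0.4, 0.7])   # a 3D sample along a camera ray
encoded = positional_encoding(point)
# These 60 features (not the raw x, y, z) feed the MLP, which predicts a
# color and density; rendering integrates those predictions along each ray.
```

The network itself is just an ordinary multilayer perceptron; it is this encoding plus the ray-by-ray rendering that lets a few phone photos become a navigable 3D scene.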
When businesses can create compelling 3D representations of their offerings, they can do far more than display static images. NeRF technology lets them showcase products in hyper-realistic ways, enabling customers to easily imagine how those products would look in their own homes and spaces. Like diffusion models, innovations such as NeRFs will be a boon for the creator economy.
With innovations like diffusion models and NeRFs poised to revolutionize the creation of rich visuals and 3D representations, advances in physical displays will bring these images to life in both the digital and real worlds.
Next-Generation Displays
Many of us have experienced our favorite Hollywood movie in 3D. However, the thrill of watching asteroids leap out of the screen toward your seat requires special 3D glasses. Today, a new wave of three-dimensional and holographic displays is being introduced into the consumer market on phone, tablet and laptop screens, letting users view different stereoscopic perspectives of content without special glasses or a headset.
To give just a few examples, the Lume Pad, an Android tablet that launched in 2020, features a three-dimensional mode that users can switch to seamlessly. The Looking Glass Factory display works by simultaneously displaying 45 to 100 different views of the same scene, each offset slightly, thereby creating the 3D effect. Dimenco has similar technology that uses eye-tracking to achieve higher resolutions for its single-user display.
As these displays become ubiquitous, everything people view on screens — store displays, movies, phones, photos — could be rendered in 3D for a far more immersive and engaging experience.
Innovations like diffusion models, NeRFs and next-generation displays are poised to transform the way we shop for products online. In the home alone, the possibilities for how we shop for and experience our physical spaces are vast, driven by a vision in which technology does much more than play out in a virtual world. We're excited to see how upcoming innovations that blend digital bits with physical atoms will reimagine the physical spaces that are close to all our hearts — the spaces we call home.
Shrenik Sadalgi is the director of R&D at Wayfair. He leads a team of creative technologists that envision and build the future of retail and home.
Shrenik also serves as Chairman of the Khronos 3D Commerce Working Group, a group of leading retail and technology companies exploring the opportunity to accelerate the adoption of 3D experiences by establishing a set of universal standards for platform-agnostic 3D model creation and distribution. Shrenik holds a master’s in computer science from Columbia University where he was a recipient of the MSTA fellowship.