The evolution of live streaming technology can be seen as the culmination of a century-long quest. As the internet has grown and transmission speeds have increased, live streaming technology has made huge progress. At the dawn of the internet, live streaming was mostly used for audio, and it later moved to video and television. Websites that streamed TV programs and sports became popular as internet speeds improved toward the mid-2000s. The technology matured around 2010, when ultra-low-latency streaming and high-efficiency video coding were introduced, and widespread advances in hardware, from modern servers to powerful personal computers, have made high-definition streaming routine. On-demand video streaming drastically changed content delivery, shifting from traditional scheduled, TV-like programming to on-demand options chosen by consumers. As live streaming technology continues to evolve, people are no longer satisfied with the flat 2D images of the 1990s and 2000s; they expect high-quality, immersive 3D content. With the help of 2D to 3D conversion, media industries such as film, gaming and television gain access to the third dimension and can add new dimensions to their content, which in turn pushes the general development of 3D technology. Research in 2D to 3D conversion has also been fueled by the increasing demand for 3D content; for example, graduate students are working on improving the quality of 2D to 3D conversion for well-known 3D movies in the modern film industry.
Evolution of Live Streaming Technology
The evolution of live streaming technology can be divided into three critical phases. The mid-1990s saw the launch of the earliest live streaming video platforms from organizations such as MSNBC, CBS Sports, and Progressive Networks. When Microsoft released Windows Media Services 4.1 in 1999, cellular technologies had begun to provide enough bandwidth and stability for video-capable mobile phones and multimedia computers to work together, making it practical to extend live streaming to wireless cellular networks. From 2005 onwards, modern digital camcorders, both amateur and professional, could capture digital video that was easily transferred from its original storage to a computer through FireWire interfaces; at the same time, broadband internet became more prevalent and accessible, making the extensive deployment of live streaming possible. Recent years have seen a rapidly rising trend of live streaming data usage on social media platforms such as YouTube, Instagram, and Snapchat, as all major social networks invest increasing resources in live streaming technology to stay competitive. Emerging cloud-based live streaming platforms and their applications have been revolutionizing how people think about live video streaming over the internet. Telemedicine, for example, has been fundamentally transformed because providers can host virtual consultations with patients over live video, and content creators can reach a significantly wider audience. Digital content has grown in significance and now plays a vital role in connecting products to end users, allowing them to be experienced in a digital world; such content can be anything from traditional static images to 360-degree panoramas and live-streamed video.
In addition to the development of live streaming technology, there have been fundamental improvements in 3D technology. 3D movies have moved past the stage of being a fad: technological advancements make new 3D film experiences enjoyable and even allow audiences to view classics in a way they never could before. 3D technology has now arrived in our households and has become much more than an occasional trip to the local cinema. With modern society dominated by technology and all sorts of content readily available on the web, manufacturers have been able to push the boundaries of 3D and release TVs that exhibit cutting-edge smart technology and innovation. These advancements reflect how 3D technology has become increasingly social and accessible in recent years. Applications like 3D scanning have brought virtual 3D reality home, and this matters not only for personal entertainment: industries from transportation to healthcare are leveraging the power of 3D. This trend toward a more open and accessible 3D environment will likely continue to grow as live streaming technology advances. The boundaries and direct impacts of the evolution of live streaming technology and the field of 3D technology are discussed in the next section.
Rise of 2D Live to 3D Conversion
The idea of transforming 2D films into 3D is not a new concept. In the early days of cinema there were a few attempts to produce 3D films from 2D footage, but they were very expensive and the technology was not advanced enough to pursue them at a practical level; the significant amount of work required to prepare the film prints and the 3D glasses added further cost. The rise of modern 3D technology has made the concept viable and practical. With the development of high-resolution computer graphics, scenes for a 3D film can be created from 2D data without access to the original camera setup for a particular shot, making 2D to 3D conversion of films feasible. With advances in 3D display technology, a number of well-known movies now use this kind of 2D to 3D conversion to produce a 3D effect. Although re-shooting in 3D is sometimes considered, the cost and amount of work involved are still relatively high, and in most films only a very small number of genuinely 3D shots are introduced when the 2D to 3D conversion method is chosen. The conversion process for films follows some basic steps: selection of scenes and layers, rotoscoping, plate adjustment, 3D camera solving, and depth map generation. The first step is to separate the foreground and background layers for each selected scene. Visual effects artists then trace the edges of characters and objects, and rotoscoping takes place frame by frame, keeping each edge consistent. In other words, the conversion process requires acquiring depth information for the 2D images at different views.
The term 'solve' refers to reconstructing the position and movement of the original camera; from this and the calculated depth information, the computer can automatically generate many viewpoints. In the final step, the depth value of each pixel in these views is determined from the generated depth map. The rendered images are then displayed in formats that accommodate anaglyph glasses or polarized glasses. With the 2D to 3D conversion concept, film industries can continue to release films in 3D without developing them from scratch, and well-known 2D to 3D conversions are making a significant impact on the film market while giving viewers an exciting new 3D viewing experience.
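The last two steps above, using a depth map to synthesize a stereo pair and packing the pair for anaglyph glasses, can be sketched in a few lines. This is a deliberately naive pixel-shift approach; the function names and the handling of gaps are illustrative assumptions, not a production pipeline.

```python
import numpy as np

def depth_to_stereo(image, depth, max_disparity=8):
    """Synthesize left/right views from one 2D frame by shifting pixels
    horizontally in proportion to depth.
    image: (H, W, 3) uint8; depth: (H, W) float in [0, 1], 1 = near."""
    h, w, _ = image.shape
    disparity = (depth * max_disparity).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Near pixels are shifted more, producing a larger eye offset.
        lx = np.clip(cols - disparity[y], 0, w - 1)
        rx = np.clip(cols + disparity[y], 0, w - 1)
        left[y, lx] = image[y, cols]
        right[y, rx] = image[y, cols]
    return left, right

def anaglyph(left, right):
    """Pack a stereo pair into a single red-cyan anaglyph frame."""
    out = right.copy()
    out[..., 0] = left[..., 0]   # red channel comes from the left eye
    return out
```

Real pipelines fill the disocclusion holes left by the shift with inpainting; in this sketch they simply remain black.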
Advantages of Live Streaming in 3D
It is widely accepted that 3D adds an extra quality to the viewing experience. The compatibility of live streaming in 3D with TV, PC, and mobile platforms allows a broad viewer base to watch content in 3D. Besides its integration in movies and TV shows, 3D has also found its way into many other multimedia applications, such as gaming, medical imaging, and 3D printing. With the rise of 2D to 3D conversion in the modern era of live streaming, there is great promise in providing high-quality 3D content, and interest in 3D shows no sign of declining. Viewer interest is especially strong for 3D live-streamed events, most of them on social media platforms, where 3D technology and live broadcast quality allow audiences to engage in lively discussion and share the live stream. The possibility of integrating 3D with prevailing technologies that are also pursuing the next generation of immersive multimedia, such as virtual reality and augmented reality, is another major advantage of live streaming in 3D, and there are broad prospects and incentives for developing new technology in the 3D multimedia field. With continuous advances in 3D rendering techniques and 3D content creation, and improvements in streaming and internet technology, 3D live streaming could soon become prevalent. In the future, a higher level of artificial intelligence can be integrated into live streaming, and the computational power of devices can be further exploited to provide better-quality 3D content with higher definition and frame rates. Last but not least, the expansion of 3D content libraries is also under way, providing variety, abundance, and accessibility of 3D content for creators and researchers to pursue new ideas, innovation, and realization.
With innovations and new developments in research and technology made around the world, the marriage of 3D with live multimedia has no doubt revolutionized the way of how multimedia content can be generated and presented.
Enhanced Viewing Experience
The visual depth associated with 3D content provides a more exciting and enjoyable viewing experience for live streaming audiences. Whether in 3D video games or 3D movies, the added depth typically separates foreground and background features, which often leads to an increased level of engagement with the content. This makes live streaming in 3D a viable option for a wide range of viewer interests beyond movie streaming, across content types such as sports, news, concerts and live video chats. For example, in live 3D sports broadcasting, 3D technology offers new and exciting ways to engage with the matches: visual enhancements such as side-by-side instant replays and player stat graphics rendered in 3D space take on-field sports to a level of depth never seen before. The general principle of 3D technology is to present each eye with a slightly offset 2D image, creating the illusion of depth. However, the requirement to wear specialized 3D glasses to experience this effect is a common qualm among potential viewers and a notable disadvantage of traditional 3D technology. Live streaming in 3D, by contrast, can let users enjoy a virtual three-dimensional space through a 2D screen without complicated accessories such as 3D glasses. Despite the immersive experience of virtual reality setups, the practicality of 3D live streaming in adding an extra layer of visual depth reinforces its appeal of variety and accessibility. With the enhancement 3D presents and the advanced level of engagement it offers, this form of live streaming is capable of providing an experience well beyond traditional 2D streaming.
With the continuous advancements made in digital effects and the increasing commercial demand for 3D content, the future for 3D live streaming certainly looks promising. By embracing the possibilities that 3D conversion entails, it would undoubtedly revolutionize not just the practice of live streaming but our digital lives.
Real-time Interaction with Content
When people talk about "live streaming", they usually mean a 2D video being broadcast to a screen. Viewers have no control over the content being shown, other than the ability to chat via a chatbox or react with emojis while the content is shown to them. This is how most live streams on 2D platforms, such as YouTube, Twitch and Facebook, work.

Now, imagine the same situation but in 3D: an extra dimension of depth and immersion, where broadcasters stream in 3D and viewers interact with the 3D environment as the content is being shown. Thanks to recent advancements in 2D to 3D conversion technology, this idea is becoming a reality.

Using 3D reconstruction, a technology from the fields of computer vision and mixed reality (where the digital and physical worlds are combined), the relative depth of 2D points in 3D space can be calculated in real time from 2D video. This 3D data is used to build a 3D scene and allows free-viewpoint rendering, a method of generating images by virtually placing a camera anywhere within a 3D environment.

Combining these techniques with a live broadcast environment opens the door to several exciting possibilities for viewer interaction in live 3D streaming. Viewers could have full control over how they watch live 3D content on their own screens, from changing their viewpoint and perspective to interacting with on-screen elements in real time, on the same principle as modern video games, where the player's interactions directly affect what is seen on the screen.

In the case of a live 3D broadcast produced in real time, viewers would be able to interact with and manipulate the 3D environment straight away, effectively watching the behind-the-scenes operation of the world being broadcast live. This sort of live interactivity is unprecedented in 2D streaming, as the data describing a viewer's actions would have to be sent, processed and enacted by the broadcaster's software in real time.

Such interaction is not yet common, but given the pace of technological progress and the intense research being undertaken in real-time 2D to 3D conversion, it will not be long before it becomes a practical and powerful way of engaging with live streamed content.
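The free-viewpoint rendering idea described in this section can be illustrated with a toy pinhole-camera projection: place a virtual camera anywhere in the scene and re-project the reconstructed 3D points into a new image. The camera model and parameter values below are illustrative assumptions, not any specific system's API.

```python
import numpy as np

def project_points(points, cam_pos, yaw=0.0, focal=500.0, cx=320.0, cy=240.0):
    """Project 3D world points into a virtual pinhole camera that can be
    placed anywhere in the scene, a toy version of free-viewpoint rendering.
    points: (N, 3); cam_pos: (3,); yaw: rotation about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, -s],
                  [0, 1,  0],
                  [s, 0,  c]])
    p = (points - cam_pos) @ R.T          # world -> camera coordinates
    z = p[:, 2]
    valid = z > 1e-6                      # keep points in front of the camera
    u = focal * p[valid, 0] / z[valid] + cx
    v = focal * p[valid, 1] / z[valid] + cy
    return np.stack([u, v], axis=1)
```

Moving `cam_pos` or `yaw` between frames is all it takes to let a viewer "walk around" the reconstructed scene.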
Immersive Virtual Reality Possibilities
The conversion from 2D to 3D provides exciting possibilities for immersive virtual reality. In entertainment, virtual reality is a developing area that is becoming increasingly popular in the mainstream. 3D films already provide a degree of immersion, and virtual reality takes that immersion to the next level: when a user puts on a headset to experience a film or game, the headset's position is tracked so the viewer can look around or move within the 3D world. With 3D content created through 2D to 3D conversion, the potential uses for virtual reality expand dramatically. Imagine that, while watching a football match in virtual reality, instead of moving a mouse to change the camera angle, users can simply turn their head and look around the stadium, seeing from almost any angle the live feed allows. There are also no physical boundaries when content is streamed into a virtual world: the screen size does not have to be fixed. Whether you want to watch a film on a massive virtual screen or a 3D game on a floating one, with the right software the possibility is just a few lines of code away. Better still, virtual reality could allow a higher level of control and customization over the viewing experience. With 3D content, depth can be used to create multiple 'layers' of the viewing experience, and virtual reality controls can move the layers or offer different options. For example, in a 3D film the main action might sit at a middle depth, a background layer could show backstory, and a far layer could carry subtle jokes or references, with the user deciding which layer, and which story, to watch.
By integrating virtual reality and 3D-converted content into current 2D live streaming shows, a 'new age' of entertainment can be brought to the audience. The audience can dive into a virtual world while still sharing the same moment as the performers in the real world; every movement of the head or shout from the sofa becomes part of the shared experience. With social media integration and real-time feedback through mobile phones, this could be the next big thing in the world of performance arts and live streaming.
Challenges and Limitations
The advancements in 3D technology and live streaming have created a wide array of possibilities for 2D to 3D conversion. However, the process is not without its challenges and limitations. This section discusses the most pressing difficulties for the industry: technical constraints, bandwidth and streaming quality considerations, and accessibility and device compatibility issues.

Device compatibility has been a major issue in the 3D industry ever since 3D content became popular. In the early days, the incompatibility between different 3D display technologies, such as active shutter glasses and passive polarized glasses, was a persistent problem for content makers. With the shift toward virtual reality and augmented reality in recent years, the focus has moved to ensuring compatibility between 3D content and existing and prospective virtual-technology devices. The array of devices and the lack of a standard 3D streaming method also pose great challenges to the live streaming industry. Although current mainstream gaming consoles and personal computers support live 3D content, the cost of a standard 3D live streaming setup is impractical for most ordinary users. The flexibility and mobility of virtual reality headsets and their capability to deliver immersive 3D experiences make them desirable for live 3D streaming, but they inevitably raise the technical threshold for ensuring a smooth broadcast. These limitations not only inhibit developers from creating a seamless live 3D experience for general users but also limit feasible research and development in this emerging field.

Streaming 3D media on live platforms is often inhibited by the need to support multiple types of devices and a wide range of viewing quality based on available bandwidth. Many top online platforms today, such as YouTube and Twitch, only support playback of pre-existing 3D videos on a few specific devices and browsers using their built-in 3D video player. Up to now, the only realistic way for ordinary users to share live 3D content has been side-by-side 3D output, in which the output frame places the left-eye and right-eye images of the same virtual cameras side by side within a single frame, relying on viewers who own compatible displays or 3D glasses to merge the pictures and experience the depth effect. The absence of a feasible live 3D sharing platform effectively defeats the purpose of increased interaction and prevents live 3D media from reaching its full potential.

One of the main challenges in converting 2D video content into 3D is reconciling the differences in depth perception between the 2D input and real life. The size and position of objects in 2D do not change with perspective in the same way as objects in 3D. Objects might overlap or obscure each other on different depth planes in 3D yet be completely non-overlapping in 2D. Because of this, the software must create an artificial conversion, either by making some objects transparent so that they fit within the specified depth range without obscuring other objects, or by altering the size or shape of some objects so that they overlap or obscure others appropriately. This process requires extensive manual intervention and expertise in graphic design, and the operations performed on different components must be consistent with the visual effects of the entire scene to create a comfortable and convincing 3D experience.
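The side-by-side output described in this section can be sketched as follows. The half-width squeeze and nearest-neighbour resampling are my own simplification of how frame-compatible stereo is typically packed.

```python
import numpy as np

def pack_side_by_side(left, right):
    """Pack a stereo pair into one side-by-side frame. Each eye's view is
    squeezed to half width so the packed frame keeps the original
    resolution, the format most platforms can carry as ordinary video."""
    h, w, c = left.shape
    half = w // 2
    # Nearest-neighbour horizontal downsample of each view to half width.
    idx = np.arange(half) * w // half
    frame = np.empty((h, 2 * half, c), dtype=left.dtype)
    frame[:, :half] = left[:, idx]
    frame[:, half:] = right[:, idx]
    return frame
```

A compatible display then stretches each half back to full width and routes it to the corresponding eye.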
Technical Constraints of 2D to 3D Conversion
A 2D image contains only flat X and Y data, while a 3D image also has Z depth; there is essentially no depth information in a 2D image. This means the computer has to estimate how far each part of the 2D image is from the viewer's eye. The technology that does this, 2D to 3D conversion, takes a 2D image and makes it three-dimensional by generating a view for each eye in every frame while computing the depth information in parallel. The key disadvantage of 2D to 3D conversion is that the resulting 3D image can lack depth quality: rather than showing elements that appear close to or far from the viewer, the observer sometimes sees all objects at the same depth. The calculations the computer makes are based on general assumptions, for example that objects get smaller the further away they are, but not all images follow these rules, so some areas of the converted image are distorted because the computer does not know which rules apply. A margin of error is also inherent in the generated depth map, and inevitably the real depth of part of a picture will sometimes not match the computed value. Such errors can push objects to the same effective depth, causing them to overlap and even inducing headaches or eye strain in the observer. In practice, 2D to 3D conversion can look good and offer a convincing 3D effect, but because the extra dimension is created by the computer, the results will never match footage shot natively with 3D cameras.
This gap will become increasingly evident as industry standards for producing, storing and displaying 3D images and video become more widely adopted and 3D cameras, with their inherent ability to generate genuine stereoscopic media, become more widespread.
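A toy example of the "general assumptions" such conversion relies on: a common heuristic guesses that pixels lower in the frame are closer to the camera. This holds for a ground plane but fails for overhead shots or close-ups, which is exactly the kind of distortion described above. The function is an illustrative sketch, not an algorithm from any specific product.

```python
import numpy as np

def heuristic_depth(height, width):
    """Naive depth-map guess used when no real Z data exists: assume
    pixels lower in the frame are nearer the camera (0 = far, 1 = near).
    True for ground planes; wrong for overhead shots or close-ups."""
    rows = np.linspace(0.0, 1.0, height)   # top row far, bottom row near
    return np.tile(rows[:, None], (1, width))
```

Feeding such a map to a pixel-shift renderer gives plausible depth for landscapes and flat, incorrect depth for anything that violates the assumption.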
Bandwidth and Streaming Quality Considerations
In the cinema and media industries, 3D live streaming content needs to offer multiview or volumetric video at resolutions above HD (high definition, typically 1920×1080 pixels), depending on the technology, delivered as progressive frames at high frame rates such as 60 or 120 frames per second. These properties are essential for an immersive 3D media experience, but they place a very large load on digital storage and networks. Several international research projects, such as "3D content creation chains" and "stereoscopic 3D gaming study", therefore focus on finding the best strategies for compressing hyper-stereoscopic (beyond human vision) images and video.

Offline conversion offers the opportunity to process large amounts of data for better quality and benefits from many recent advances in computational photography and computer graphics. The aim of streaming the result live, however, is to let users immerse themselves interactively in the 3D experience, for example in live 3D sports events and live 3D interactive telecommunication. Rendering to a multi-view format in parallel would additionally give viewers the freedom to choose their own viewing angles; this "3D multi-view video rendering" is widely regarded as the next big step in 3D technology.

Current best practice, by contrast, applies several real-time stages of massive digital data processing to 3D-to-3D conversion, that is, converting finished 3D material between display formats, for example from anaglyph, still widely used in home 3D TV, to autostereoscopic. This is a post-production workflow, not a streaming solution.
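Simple arithmetic shows why compression is unavoidable at these resolutions and frame rates. The figures below are for raw, uncompressed frames at 24 bits per pixel; real codecs reduce this by two orders of magnitude or more.

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24, views=2):
    """Uncompressed bitrate in Mbps for a multi-view stream, illustrating
    why aggressive compression is essential for 3D live streaming."""
    return width * height * fps * bits_per_pixel * views / 1e6

# Stereo (two-view) 1080p at 60 fps, uncompressed: about 5,972 Mbps.
stereo_60 = raw_bitrate_mbps(1920, 1080, 60)
# Doubling the frame rate to 120 fps doubles the raw load again.
stereo_120 = raw_bitrate_mbps(1920, 1080, 120)
```

Even a modest eight-view autostereoscopic stream (`views=8`) quadruples the stereo figure, which is why multiview compression is an active research topic.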
Accessibility and Device Compatibility
The third major limitation, and a crucial problem, is accessibility. In most cases, users do not want to watch live streams on a PC or laptop monitor; they prefer large-screen displays, but for 3D content they also need 3D glasses. Plenty of 3D devices and gadgets are now on the market, yet every device has its own version of glasses and its own protocols, making it very difficult to support them all. Another problem is that many different 3D displays are in common use, spanning technologies such as autostereoscopic, polarized, and anaglyphic ones, with different screen resolutions and aspect ratios. For example, many 3D projectors work with "checkerboard" stereoscopic 3D, which is not directly compatible with most 3D PC games. Viewing distance is also an important issue when preparing 3D media content, as the stereoscopic depth has to be adjusted according to distance; otherwise the viewer can perceive a "cardboard effect", one of the main reasons the first 3D film boom collapsed in the 1950s with its primitive technology. On some current 3D devices these effects still appear due to a lack of proper synchronization between the streaming device and the display, which further limits the usefulness of these devices for live 3D media streams.
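The "checkerboard" packing used by many 3D projectors can be sketched with a parity mask: pixels whose coordinates sum to an even number come from one eye, the rest from the other. This is an illustrative reconstruction of the pattern, not any vendor's exact specification.

```python
import numpy as np

def checkerboard_pack(left, right):
    """Interleave a stereo pair in the checkerboard pattern used by many
    DLP 3D projectors: even (x + y) pixels from the left eye, odd from
    the right. left/right: (H, W, 3) arrays of the same shape."""
    h, w = left.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx + yy) % 2 == 0
    return np.where(mask[..., None], left, right)
```

Supporting each such format requires a separate packing step like this one, which is part of why device compatibility is so hard to achieve.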
Future Prospects and Innovations
The future of 2D to 3D conversion and live streaming technologies looks promising. The demand for 3D content is likely to increase steadily, driven by the popularity of 3D films, 3D televisions, and immersive experiences like virtual reality. At the same time, media equipment and technology are continuously advancing: screen resolutions and refresh rates are increasing, and new devices like autostereoscopic displays, which do not require special glasses to view 3D images, are becoming more widely available. Such developments in display technology will improve the quality of 3D streaming and reduce the need for 3D glasses.

As live streaming services become more popular, there will be increased focus on and research into making 3D live streaming a reality. In particular, a seamless, delay-free method of live 3D content conversion and transmission is a long-term objective. As discussed in the section on technical constraints, algorithms that automate the 3D conversion process are still imperfect and often require manual intervention to fix certain effects. There is potential to deploy artificial intelligence as part of the 3D conversion process: using machine learning, models could be trained to recognize different types of 2D images and predict depth maps more accurately. This would reduce the amount of manual work required in the 3D conversion process and standardize output quality.

Finally, although the popularity of 3D films and televisions has waned in the past few years, a new frontier for 3D content has opened in the form of virtual reality (VR). VR refers to the use of computer technology to create a simulated environment that users can explore and interact with. Typically, VR requires the user to wear a headset and interact with an immersive 3D world. Through the expansion of 3D content libraries and the possibility of real-time interaction with 3D environments embedded in live streaming, VR is likely to become a major driving force for 3D content in the future.
Advancements in 3D Rendering Techniques
One of the most promising areas for the future of 3D content creation lies in the continued development of 3D rendering. In computer graphics, rendering is the process of generating a two-dimensional image from a 3D model. This task is incredibly complex from a computational standpoint, but substantial progress has been made in recent years in the form of faster and more efficient rendering methods.

Traditionally, 3D rendering has used a process called "rasterization". This involves the computer taking each 3D model, breaking it down into individual 2D shapes, and working out which pixels on the screen need to be colored in. However, in order for rasterization to run quickly enough for use in real-time applications (like video games), certain sacrifices have to be made.

For one, many complex lighting effects and surface properties are simply not possible to achieve without greatly slowing down the rendering process. Additionally, because rasterization requires the computer to repeatedly perform complex calculations for every model in the scene, it still requires very powerful hardware to achieve an acceptable frame rate. This means that current 3D technology does not easily allow for sophisticated, high-quality effects to be used in a real-time streaming environment, where rendering must be performed at a speed that can keep up with the demands of live video.

However, recent improvements in what is known as "ray tracing", a rendering technique that better mimics the way light behaves in the real world, have started to change this. Ray tracing works by simulating the path of individual light rays through a 3D scene and using their interactions with objects and materials to calculate the final color of a given pixel. Although this process is vastly more computationally intensive than simple rasterization, the resulting images are noticeably more realistic and capable of producing visually stunning effects.

With increasingly powerful consumer hardware and the advent of dedicated ray tracing chips in modern graphics cards, it seems likely that this technique will begin to see more and more use in the field of real-time rendering in the very near future. By leveraging the success of these new technologies and continuing to explore further innovations in 3D rendering, we may soon see a new era of high-quality, real-time 3D content on live streaming platforms. Such advancements would help to provide a much more immersive and visually impressive experience for 3D content viewers and open up the potential for an even wider range of applications in today's modern 3D industry.
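The ray tracing approach discussed in this section can be made concrete with a minimal tracer: one ray per pixel and a single diffuse sphere. The scene layout and shading model are illustrative assumptions chosen for brevity, not a production renderer.

```python
import numpy as np

def trace_sphere(width=64, height=48, center=(0.0, 0.0, 3.0), radius=1.0):
    """Minimal ray tracer: cast one ray per pixel from the origin and
    shade a single diffuse sphere. This per-ray intersection loop is the
    core workload that dedicated ray tracing hardware accelerates."""
    img = np.zeros((height, width))
    light = np.array([1.0, 1.0, -1.0])
    light /= np.linalg.norm(light)
    c = np.array(center)
    for y in range(height):
        for x in range(width):
            # Ray direction through this pixel on a virtual image plane.
            d = np.array([(x - width / 2) / width,
                          (y - height / 2) / width, 1.0])
            d /= np.linalg.norm(d)
            # Solve |t*d - c|^2 = r^2 for the nearest intersection t.
            b = np.dot(d, c)
            disc = b * b - (np.dot(c, c) - radius * radius)
            if disc >= 0:
                t = b - np.sqrt(disc)
                if t > 0:
                    n = (t * d - c) / radius               # surface normal
                    img[y, x] = max(np.dot(n, light), 0.0)  # diffuse shading
    return img
```

A full renderer adds shadows, reflections and many bounces per pixel, which is precisely where the computational cost explodes and hardware acceleration becomes necessary.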
Integration of Artificial Intelligence in Live Streaming
In the past decade, artificial intelligence has gained significant attention in computer graphics, especially in 3D modeling, rendering, animation, and interactive systems. However, the potential of merging artificial intelligence and live streaming in the context of 2D to 3D conversion has not been explored in depth. Integrating artificial intelligence into live streaming platforms could not only address some of the current limitations of 2D to 3D conversion but also open up possibilities that are not feasible through human intervention alone. For example, given a live 2D video stream, artificial intelligence algorithms could predict depth intelligently and render a stereo pair of images in real time. Because current 2D to 3D conversion methods are labor-intensive and depend heavily on individual professional skill, the fusion of artificial intelligence with live streaming would greatly simplify the conversion process. Moreover, current 2D to 3D conversion handles only simple types of content well, because many mistakes are made during conversion and intensive manual work is needed to fix them; intelligent processes driven by artificial intelligence could correct such mistakes automatically and produce continuous 3D output far faster than humans could. At present, by contrast, live streams are largely one-way: the video data is encoded, transmitted, and discarded, with almost no interaction beyond the conversion itself.
If artificial intelligence is employed in live streaming platforms, the digital world and the real world can be bridged, because interaction can be generated as soon as a 2D video stream is detected. While this integration opens doors to new ideas and possibilities, researchers also face a number of technical barriers. Computing power is a key element, because good artificial intelligence requires high-speed processing to perform iterative algorithms. A fast and reliable internet connection is also vital for live streaming; otherwise the video feed may lag and hamper the intelligent decision-making process. Last but not least, research is needed to find and develop the most suitable artificial intelligence algorithms for real-time 2D to 3D conversion in live streaming. It is likely that a series of algorithms will be used, with the selection depending on what kind of content is being converted; other factors, such as test results on different platforms and the complexity of the final output, should also be taken into account.
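As a toy stand-in for the learned depth predictors described above: real systems use deep neural networks, but the core idea, fitting a model that maps per-pixel features (such as brightness or vertical position) to depth, can be illustrated with a least-squares fit. The feature choice and function names here are illustrative assumptions.

```python
import numpy as np

def fit_depth_model(features, depths):
    """Fit a linear model mapping per-pixel features to depth with least
    squares, a toy stand-in for learned depth prediction.
    features: (N, F) array; depths: (N,) target depth values."""
    X = np.column_stack([features, np.ones(len(features))])  # add bias term
    w, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return w

def predict_depth(features, w):
    """Apply the fitted weights to new per-pixel features."""
    X = np.column_stack([features, np.ones(len(features))])
    return X @ w
```

A trained model of this kind, applied frame by frame, is what would let a streaming platform synthesize a stereo pair from a monocular feed without manual rotoscoping.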
Expansion of 3D Content Libraries
In recent years, a considerable increase has been observed in the amount of stereoscopic 3D programming being generated, driven by the emergence of new 3D TV channels and growing consumer interest in 3D TV and cinema. As a result, the demand for 3D content is on the rise. Expanding 3D content libraries is important because creating 3D content is both more time-consuming and costlier than creating a 2D equivalent. One way of addressing this need is to convert 2D video content into 3D. Although some of this work can be done automatically using commercially available software, it often requires operators to define the depths within the images by selecting different areas of the image and setting them at different depth planes. Converting 2D video content in this way is technically challenging, and in many cases it is unlikely to deliver a good result. For instance, converting older 2D video games, in which the graphics for the various depths within a scene are not available separately, is probably unrealistic. Also, when the source material includes rapid shifts in camera angle, as often occurs in music videos and action content, sections of the resulting stereoscopic video may cause viewer discomfort. As technology develops and conversion processes improve, the use of 2D to 3D conversion to expand 3D content libraries will become more widespread; this is an area of active research, and significant improvements can be anticipated in the near future.
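The operator workflow described above, selecting regions of a frame and assigning each a depth plane, can be mimicked in a few lines. The rectangular regions and painting order are illustrative assumptions; real tools use hand-drawn rotoscoped masks rather than rectangles.

```python
import numpy as np

def assign_depth_planes(shape, regions):
    """Build a depth map the way a conversion operator does by hand:
    select regions of the frame and assign each one a depth plane.
    regions: list of ((y0, y1, x0, x1), depth) tuples, painted in order
    so later (foreground) selections override earlier ones."""
    depth = np.zeros(shape)   # default: background at depth 0 (far)
    for (y0, y1, x0, x1), d in regions:
        depth[y0:y1, x0:x1] = d
    return depth
```

The resulting map feeds the same pixel-shift or rendering stage as an automatically estimated one; the bottleneck is the manual region selection this sketch glosses over.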