

A Brief History of Figures in Graphics: The Graphics Research Past of Two Turing Award and Oscar Winners



From August 8 to 11, SIGGRAPH, the top international conference on computer graphics, was held in Vancouver, Canada. Pat Hanrahan and Ed Catmull, winners of both the 2019 Turing Award and multiple Academy Awards, appeared at the conference and gave a talk entitled "Shading Languages and the Emergence of Programmable Graphics Systems".

Pat Hanrahan, one of the founding employees of Pixar Animation Studios, is currently a professor in Stanford University's computer graphics laboratory. At Pixar he led the design of the RenderMan interface specification and the RenderMan shading language, and he was involved in the production of Pixar classics such as Toy Story. He has won three Academy Awards for his work in rendering.

Edwin Catmull, a renowned computer scientist, is a co-founder and former president of Pixar Animation Studios. He also co-founded the Computer Graphics Lab at the New York Institute of Technology (NYIT), known as a birthplace of modern visual effects. He combined his love of animation with computer graphics and has won nine Academy Awards for his technical work.

In 2019, Hanrahan and Catmull, two founders of modern computer graphics, were jointly awarded the Turing Award in recognition of their contributions to 3D computer graphics and the revolutionary impact of those techniques on filmmaking and computer-generated imagery (CGI).

In his talk, Hanrahan traced the development of rendering systems, shading languages, and GPUs through his graphics research at Pixar and Stanford since the 1980s. He believes the development of graphics has brought about a golden age of computer architecture, allowing us to build different kinds of computers optimized for different tasks.

Edwin Catmull told the story of how he got started in computer graphics, struggled to find a job where he could push graphics forward, and then founded the computer graphics division of Lucasfilm, the predecessor of Pixar. Catmull recalled it as an exciting time, when he and the creators around him broke through the limits of their respective fields and turned the impossible into reality.

Pat Hanrahan

It is a great honor to win the Turing Award. The first person to win the Turing Award in the field of computer graphics was Ivan Sutherland, the father of computer graphics, in 1988. Edwin is a student of Ivan's, and I worked at Pixar Animation Studios, so this is a shared honor for two generations of researchers. A whole generation of computer graphics scholars before me laid the foundation for this field and gave me a great deal of inspiration early in my career.

▲ Ivan Sutherland, winner of the 1988 Turing Award

I have been recognized for my computer image generation work in the film industry, so I will start there. The famous synthetic image below, called "The Road to Point Reyes", was created at George Lucas's effects company Industrial Light & Magic (ILM). Point Reyes is a stretch of seashore in Northern California. The image was made to convince Lucas that computer graphics could paint a realistic picture of an imagined world, reproduce the diversity we see in the real one, and show convincing detail and complexity.

▲ The Road to Point Reyes

After being awarded the Turing Award, I realized that the larger computer science community actually did not know much about computer graphics or about the knowledge needed to make movies. They thought it was amazing, and I had to explain what computer graphics is and what graphics knowledge goes into making films. We need to develop models and algorithms to create pictures of everything around us, and the entire SIGGRAPH community has worked together on this. Over the years we have figured out how to render people, places, and things: teapots, rabbits, mountains, legs, streams, clouds, rainbows, halos, fabrics. I have also studied hair and skin. It is not just Edwin and I; the whole graphics community has come up with hundreds of cool ideas that make these movies possible.

This image is just an early example of how graphics can be used to create virtual imagery. When we were at Lucasfilm, the company mainly did special effects for Lucas's films, and the goal was to create computer graphics that could be seamlessly integrated with live-action footage.

An early application is the 1985 Industrial Light & Magic film Young Sherlock Holmes; you can see the stained-glass character in the church.

▲ The film Young Sherlock Holmes

The early goal of computer graphics was to make realistic images; that was our first goal.

Rendering system

We had a rendering system called Reyes at the time. Reyes is a playful acronym for "render everything you ever saw". Our goal was to simulate the diversity of the visual world: whatever the eye can see, we should be able to model. We wanted completely different levels of geometric and visual complexity; we wanted high-quality, artifact-free images that could be composited with live action; and we wanted the rendering system to run efficiently in hardware.

You have probably read the great paper on the Reyes image rendering architecture co-authored by Robert Cook, Loren Carpenter, and Ed Catmull. By the time I arrived, they had already completed all of these amazing innovations.

When I arrived at Pixar in 1986, we had a hardware goal: render a scene with 80 million micropolygons. At the time, a piece of graphics hardware you could buy might render only about 40,000 polygons. The total amount of computation we needed was far more than existing machines could deliver, so we could not do this unless we built hardware to speed up the process.

Take the movie Monsters University as an example: it took about 100 million CPU hours to compute, roughly a million times more computation than our original goal. Over the 30 years from 1983 to 2013, computing power increased about tenfold every five years under Moore's Law, six factors of ten, or about a million overall. All of this was achieved through the growth of computing power.

My job was to deal with shading languages and rendering quality. The picture below is me in 1983, when I was a very happy graduate student. I had discovered computer graphics about a year earlier and spent a year teaching myself how to program in C and Unix; I did not know how to program before that. I decided to learn programming because I wanted to do computer graphics and create something. I tried to implement every paper I read, and there was an STC graphics terminal in the lab where I worked. I sat there all day writing software to implement various algorithms.

▲ Pat Hanrahan in 1983

This is the first paper I wrote. The title is "Procedures for Parallel Array Processing on a Pipelined Display Terminal". My advisor, Lenn Yore, was obsessed with neural networks at the time, focusing on computer vision and hardware. These ideas of building hardware, and languages for hardware, have stayed with me ever since I started working in graphics.

After I joined Pixar, I read two great papers. One was "Shade Trees", published by Robert Cook in 1984 as part of the rendering system work of that time; the other was "An Image Synthesizer", published by Ken Perlin in 1985. Their idea was that if you have a rendering system, you should build a language for it, or some way to extend it.

This is the language I wrote at the time. No one but me likes this picture of a corroded teapot, which I have been using as a test example ever since.

The bumps on the teapot are created with the noise function proposed by Ken Perlin. You only need to sum six octaves of fractal noise to create this random bump texture, then use it to perturb the surface. Finally you recompute the surface normals and shade the result. This is a typical RenderMan shader. So my contribution was really to build a complete language on top of what Robert had done and figure out how to implement it efficiently in software.
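
To make the recipe concrete, here is a minimal Python sketch of the idea, not the actual RenderMan shader: it uses a simple hash-based value noise as a stand-in for Perlin noise, sums six octaves of it, and perturbs a surface normal with the result. The function names and constants are illustrative.

```python
import math

def hash01(ix, iy):
    # Deterministic pseudo-random value in [0, 1) for an integer lattice point.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h & 0xFFFFFF) / float(1 << 24)

def value_noise(x, y):
    # Smoothly interpolated lattice noise (a simple stand-in for Perlin noise).
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    n00, n10 = hash01(ix, iy), hash01(ix + 1, iy)
    n01, n11 = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
    nx0 = n00 + sx * (n10 - n00)
    nx1 = n01 + sx * (n11 - n01)
    return nx0 + sy * (nx1 - nx0)

def fractal_noise(x, y, octaves=6):
    # Sum six octaves, each at double the frequency and half the amplitude.
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= 0.5
        freq *= 2.0
    return total

def bumped_normal(u, v, scale=0.15, eps=1e-3):
    # Perturb a flat surface's normal by the gradient of the bump height,
    # the same trick a bump/displacement shader plays on the teapot.
    h  = fractal_noise(u, v)
    hu = (fractal_noise(u + eps, v) - h) / eps
    hv = (fractal_noise(u, v + eps) - h) / eps
    nx, ny, nz = -scale * hu, -scale * hv, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

print(bumped_normal(0.3, 0.7))
```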

This was the early period, when we did this work in the 1980s. To be honest, my main reason for doing this research was laziness. My users were very demanding; they had a million ideas about how to use the rendering system and wanted me to do far too many things. I was mainly working on Reyes, so designing the language and letting them build the things they wanted themselves was largely a way out.

At about the same time, graphics processing units (GPUs) and graphics workstations appeared. After finishing his master's degree in 1982, Kurt Akeley went to Silicon Graphics (SGI) to work with Jim Clark. One of my favorite papers of his is "RealityEngine Graphics". In 1984, all a workstation could do was draw wireframes; by 1988 it could draw shaded polygons; and by 1992 you could get full texture mapping. That took eight years. Moore's Law was at work the whole time, but the technology was not invented overnight.

OpenGL architecture

When I went to work at Stanford in 1995, there was a GPU craze. Nvidia coined the term GPU (graphics processing unit) in 1999, using the name for the first chip to implement the complete graphics pipeline on a single chip. Before that, chips could rasterize, drawing triangles and lines, but could not do transform and lighting.

That GPU was built from 17 million transistors; today's GPUs have thousands of times as many, but at the time getting the whole pipeline onto one chip was a considerable engineering feat. Earlier graphics pipelines, such as an SGI machine, were made up of multiple chips, but this GPU implemented it all on a single chip, which was a real breakthrough.

I think one of Kurt's most important contributions is the development of the OpenGL architecture; what is shown here is just a simplified schematic view of it. At that time, Pixar and SGI decided to jointly develop a 3D interface. Kurt represented SGI, I represented Pixar, and we met every week.

In the end we parted ways, while fully respecting each other's work. I remember him explaining to me one day that developing a graphics library for a workstation or an interactive computer is very different from developing one for an offline rendering system. In fact, I think OpenGL has been more influential than RenderMan, because it runs on virtually every computer we have.

The really interesting and important thing about OpenGL is that it is an architecture, and an architectural specification is independent of any particular implementation. We all know that Fred Brooks, the father of the IBM System/360, won the 1999 Turing Award, not for work in computer graphics, but for his work on computer architecture. An architectural specification simply provides a blueprint for how something should behave; given that blueprint, you can implement it in many different ways.

This architecture is actually very similar to the design of a CPU instruction set architecture. So, in a sense, it laid the foundation for graphics chips. Over the years there has been a lot of progress, such as DX9 and DX10, that has gone beyond what Kurt did in the first place.

After I left Pixar, I continued working on both languages and architecture. I wrote a grant proposal in 1995. "I want to do more than we can do now," I wrote. "What comes next is obvious: ray tracing and global illumination." It was not hard to make such a prediction, because we could extrapolate Moore's Law and imagine applying the methods we had come up with to hardware.

Real-time programmable shading languages

After that, people tried to build real-time programmable shading languages. A team led by Henry Fuchs and others at the University of North Carolina at Chapel Hill developed the Pixel-Planes and PixelFlow architectures. One of the earliest real-time shading languages was developed by Marc Olano and Anselmo Lastra in 1998, and Mark Peercy and colleagues built another system around 2001. We developed a real-time shading language at Stanford in 2001. Then in 2003, Bill Mark, Kurt Akeley, and others working at Nvidia created Cg. HLSL and GLSL were also created around that time.

I want to emphasize that it was not easy to take the techniques we developed in software and build corresponding hardware. This gave rise to a very important method we still use today: multipass algorithms. You run the rendering system once, run it again, accumulate the image across passes, and with each pass you can enhance the image, add detail, add shadows. The figure here shows a bowling pin rendered using six different passes through the graphics system. This is the method proposed by Mark Peercy, Marc Olano, Airey, and Ungar in the paper "Interactive Multi-Pass Programmable Shading". You can think of the framebuffer as a register or accumulator.

You run an operation on the framebuffer, then add something to it, for example C, the color from the triangle you are rendering, and T, a texture value. Repeat this process over and over, and it looks as if you are running a program: you are just executing these instructions against the framebuffer and building up the result you want. This is a very attractive idea, and it can be used to implement a complete shading language.
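
A toy sketch of the multipass idea, with the framebuffer as a plain 2D array acting as an accumulator. The pass sources (color, texture, shadow) and the blend rules below are made-up stand-ins, not the paper's actual passes.

```python
# Toy multipass rendering: the framebuffer acts like an accumulator register.
# Each "pass" re-rasterizes the scene and folds its result into the buffer.
WIDTH, HEIGHT = 4, 4

def new_buffer(value=0.0):
    return [[value] * WIDTH for _ in range(HEIGHT)]

def rasterize_color(x, y):      # C: base color from the triangle being rendered
    return 0.6

def rasterize_texture(x, y):    # T: a texture lookup for the same fragment
    return 0.5 + 0.1 * ((x + y) % 2)

def rasterize_shadow(x, y):     # a shadow term computed in its own pass
    return 0.0 if (x == 1 and y == 1) else 1.0

def run_pass(framebuffer, source, combine):
    # One pass = render the scene once and combine the result into the buffer.
    for y in range(HEIGHT):
        for x in range(WIDTH):
            framebuffer[y][x] = combine(framebuffer[y][x], source(x, y))

fb = new_buffer()
run_pass(fb, rasterize_color,   lambda dst, src: src)        # pass 1: write C
run_pass(fb, rasterize_texture, lambda dst, src: dst * src)  # pass 2: modulate by T
run_pass(fb, rasterize_shadow,  lambda dst, src: dst * src)  # pass 3: apply shadows
print(fb)
```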

At around the same time, all the graphics vendors started to converge on a shader program approach. Unlike multipass algorithms, which keep adding something to the framebuffer by running very simple instructions, a shader program is a complete running program, perhaps a small program of 128 instructions; it takes its input from the rasterization stage, runs the program on it, and then stores the output.

This turned out to be a very important insight. You can think of it this way: with multipass you are doing simple vector operations; with a shader program you are doing very complex operations on each input. The advantage of the latter is that you do far more arithmetic per unit of bandwidth you spend, and memory bandwidth is always the limiting factor. This turned out to be a very important innovation, and very important for how programs are developed. More graphics systems adopted the shader program method, which gives you what we call "arithmetic intensity": a lot of computation per byte of memory traffic.
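
A back-of-the-envelope way to see the argument: arithmetic intensity is flops per byte of memory traffic, and a fused shader program does far more math per framebuffer access than a chain of trivial blend passes. The flop and byte counts below are illustrative assumptions, not measurements.

```python
def arithmetic_intensity(flops_per_fragment, bytes_per_fragment):
    # Flops performed per byte moved to or from memory.
    return flops_per_fragment / bytes_per_fragment

# Multipass: each pass does a trivial blend (a few flops) but reads and
# writes the framebuffer every single time.
passes = 6
multipass = arithmetic_intensity(
    flops_per_fragment=passes * 4,            # ~4 flops per blend pass
    bytes_per_fragment=passes * (16 + 16))    # RGBA read + write per pass

# Shader program: one long program (say 128 instructions) per fragment,
# with a single framebuffer write and one texture fetch.
shader = arithmetic_intensity(
    flops_per_fragment=128,
    bytes_per_fragment=16 + 16)

print(f"multipass ~{multipass:.2f} flops/byte, shader ~{shader:.2f} flops/byte")
```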

The last piece of the puzzle is GPGPU, general-purpose computing on GPUs. GPGPU is not a new idea; research in this area can be traced back to the early days of computing, when I was a graduate student. People had been building parallel computers for many years and had implemented a simple data-parallel programming model. I programmed machines that used this model, and most of the problems around this style of parallel programming had already been worked out.

In this data-parallel model, the first primitive is map, which applies a function to a collection, just as running a shader program over all the fragments generated by triangles applies a function to the collection of fragments. There is also filter: given a collection of things, you can delete some of them. Then there is gather, which takes a whole set of memory addresses and collects the values stored there.

But there are two other operations GPUs were not good at: scatter and reduce. Scatter means writing values to arbitrary locations spread across memory, and reduce means collapsing a collection, similar to summing a vector. These two operations are fairly simple; with a few adjustments to the GPU we could support them and get a general-purpose parallel computer.
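
Written out in plain Python, purely as an illustration and not as Brook's actual API, the five primitives mentioned above look like this:

```python
def par_map(fn, xs):                 # apply a function to every element
    return [fn(x) for x in xs]

def par_filter(pred, xs):            # keep only elements that pass a test
    return [x for x in xs if pred(x)]

def gather(values, indices):         # read from arbitrary addresses
    return [values[i] for i in indices]

def scatter(values, indices, size):  # write to arbitrary addresses
    out = [0] * size
    for v, i in zip(values, indices):
        out[i] = v
    return out

def reduce_add(xs):                  # collapse a whole array into one value
    total = 0
    for x in xs:
        total += x
    return total

fragments = [0.1, 0.4, 0.3, 0.9]
shaded = par_map(lambda f: f * 2.0, fragments)   # like running a shader on each fragment
bright = par_filter(lambda f: f > 0.5, shaded)   # discard the dim fragments
print(bright, gather(shaded, [3, 0]), scatter(shaded, [2, 0, 3, 1], 4), reduce_add(shaded))
```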

This led to the Brook system, launched in 2004 by my student Ian Buck, who later went to Nvidia and became the chief architect of CUDA. It is really a simple idea: turn the GPU into a data-parallel virtual machine that you can use even if you are not a graphics programmer. Before that, when people tried to run other algorithms on GPUs, they had to be graphics programmers: to run anything, you had to render triangles and learn how to use things like OpenGL or DX.

This was perhaps the last step. We always need more cycles: we build parallel computers, and after a few years we gradually turn them into general-purpose computers.

There are two other things I think are very important. The first is domain-specific languages. We can think of OpenGL as a library, like the simple OpenGL program shown in the figure.

But we can also think of OpenGL as a language with a grammar of its own. Here I wrote out a grammar for OpenGL. Even though it is just a library, it is very much like a small language embedded in C. If you don't follow this grammar, it reports an error, or may even give you a blue screen. So OpenGL is really an embedded, domain-specific language.
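
As a small illustration of that "grammar", here is a sketch of an immediate-mode OpenGL program using the PyOpenGL and GLUT bindings (both assumed to be installed; this is not the program from the talk). The vertex calls are only legal between glBegin and glEnd, which is exactly the kind of rule an embedded language enforces.

```python
# OpenGL's immediate-mode calls behave like a tiny embedded language:
# glVertex* is only legal between glBegin and glEnd.
from OpenGL.GL import (glBegin, glEnd, glVertex2f, glColor3f, glClear,
                       glFlush, GL_TRIANGLES, GL_COLOR_BUFFER_BIT)
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutMainLoop, GLUT_SINGLE, GLUT_RGB)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)          # "begin" opens the production ...
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f( 0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f( 0.0,  0.5)
    glEnd()                        # ... and "end" closes it
    glFlush()

glutInit()
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
glutCreateWindow(b"one triangle")
glutDisplayFunc(display)
glutMainLoop()
```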

What does that mean? I teach graphics, and I can teach people to use OpenGL in a week or two. So it is very easy to use; you don't need to know anything about Nvidia's hardware; it is extremely portable, running on everyone's GPU; and it is very fast, rendering incredibly quickly.

Treating OpenGL as a language for programming graphics was an amazing innovation, and one we should encourage in this field. It allowed ATI, Nvidia, and the other companies of the time to explore completely different hardware implementations without changing the programming model. This is an advantage that people who build CPUs have never had, because CPUs are programmed in C and assembly language, and it is impossible to change the architecture without upsetting all the programmers, since once it changes their tools no longer work. So this is a great thing, and I think the current idea of introducing new architectures through domain-specific languages should be encouraged; many systems follow this path, such as Halide.

Graphics brings the golden age of computer architecture

The second idea, very similar and just as important, is domain-specific architectures.

I am now mainly engaged in hardware design; I am making my own chips, and I think this is a good time to build chips. Why am I interested in this? We have all heard of the end of Moore's Law, which feels like an existential threat to graphics researchers who rely on it. If Moore's Law disappears, it means I am retiring, and maybe it is time to retire.

You might think the end of Moore's Law means the end of the world, but in their 2017 Turing Award lecture, Hennessy and Patterson argued that it will actually be a golden age for computer architecture. Their basic argument is simple: we used to have essentially one kind of computer, an ARM or x86 machine, but now we are building all kinds of specialized computers. It is like biology: think of the Cambrian explosion, where a small number of organisms evolved into a planet full of life. We now have all kinds of interesting computing devices.

We all know Apple's M1 Max chip, which has codecs, compression engines, a security block, eight high-performance cores, two low-power cores, dedicated I/O, and a many-core GPU. Note that the GPU is larger than the CPU, and much larger from a compute standpoint. The first GPU had 17 million transistors; this chip has 57 billion.

So now people are using and building many different types of computers to optimize different tasks, which is what I call "domain-specific architecture".

Finally, I want to say that graphics has really changed the way computer systems are built. The highest-performing processors in the world today are GPUs, because graphics can make use of an essentially unlimited amount of computation. Work on animation and reinforcement learning is just the beginning, and even more cycles will be consumed in the future. This is not only about how we use domain-specific languages and architectures, but also about how other people build their systems, such as machine learning systems.

So, this is probably the most exciting moment in computing, and I hope more people will join the graphics community in the future.

Edwin Catmull

Conceiving the future of graphics

I am glad to be part of this event. SIGGRAPH has been my home for more than 40 years; I have many memories and friends here. In my career, this field started slowly, but as it in turn changed other industries, we experienced accelerating and radical change. I would like to talk about the strong impact these changes had on me personally.

When I was young, I wanted to be an animator, but frankly my ability was not strong enough, so I switched to physics. Near graduation at the University of Utah, I took a computer science course taught by Alan Kay. That course opened the door to a new world for me, so I went back to the University of Utah for graduate school to study computer science. My first class there was taught by Ivan Sutherland. I was very lucky: my two teachers, Alan Kay and Ivan Sutherland, had a profound influence on me, and both later won the Turing Award.

▲ Alan Kay, winner of the 2003 Turing Award

I learned several basic rules long ago. The first comes from Alan. Alan Kay taught us a less-than-intuitive idea: when something is growing exponentially, people should understand what that growth means, see beyond what exists today, and design for the future.

In 1969, I witnessed something whose significance I did not appreciate at the time. At an ACM conference, Alan said in a talk that computers would get faster and smaller, and that laptops would one day become a reality. Computers then were still huge machines that filled racks. Alan showed a slide of what a computer might look like in the future. The machine in that picture looked very much like the laptops that appeared many years later: the "future computer" mock-up was foldable, with a cartoon image displayed on its screen.

After his talk, the audience asked questions one after another; one of them was the then president of ACM. He criticized Alan, saying he should not have made such ridiculous predictions, and that putting a cartoon on the mock-up's screen was absurd. I did not understand why he was so angry at the time, but I learned that even experienced people find it hard to think through the meaning of exponential growth, and that is still true.

From then on, I told myself that I must never turn a blind eye to such changes. "Think about change" became a basic rule of mine and stayed with me throughout my later career.

In graduate school, Ivan taught me another set of rules. He had already laid the foundations of graphics at MIT, built the first virtual reality and augmented reality system at Harvard, and then established the computer graphics program with Dave Evans at the University of Utah. He described a vision for computer graphics and then set up a step-by-step program aimed at solving the problems standing between us and that vision.

At first, computer imagery was very limited in the number of polygons it could handle; we could only process one scan line at a time. The first step in the program Ivan and the team laid out was to develop algorithms to determine which polygons are visible in the image. One of the people who created a visible-polygon algorithm was John Warnock, who later founded Adobe. Another student developed an algorithm that could be turned into hardware for rendering polygons in real time.

Finding life in animated films

The next step was to make objects look smooth. But their silhouettes were still polygonal, and I realized I could combine my love of animation with this new field of computer graphics.

For a class project I made a polygon model of my left hand. I liked that project very much, and I wanted to add my own strength to the progress of computer graphics. Ivan suggested that I find a way to bend the silhouettes of polygons. After a lot of thought, I concluded that this approach was fundamentally flawed: what I needed was to render patches, to render curved surfaces directly, but that required far more memory than computers had at the time. The only way I could do it was to keep the whole image and its Z-buffer in memory. Since the operating system did not support paging, I wrote my own pager to move blocks of the image into and out of memory. I even broke one of the disk drives because it rattled back and forth across the whole disk; disks were very large in those days.
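
For readers unfamiliar with it, the Z-buffer mentioned here reduces to a very small idea: store a depth per pixel and keep only the nearest fragment. A minimal sketch follows, with illustrative sizes and values.

```python
# Minimal Z-buffer: for each pixel, keep the color of the nearest surface seen
# so far. Real renderers do this for every rasterized fragment of every pixel.
WIDTH, HEIGHT = 4, 4
INFINITY = float("inf")

depth = [[INFINITY] * WIDTH for _ in range(HEIGHT)]
color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, rgb):
    # Keep the fragment only if it is closer than what is already stored.
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = rgb

write_fragment(1, 1, z=5.0, rgb=(255, 0, 0))   # far red surface
write_fragment(1, 1, z=2.0, rgb=(0, 0, 255))   # nearer blue surface wins
write_fragment(1, 1, z=9.0, rgb=(0, 255, 0))   # farther green surface is rejected
print(color[1][1])                             # (0, 0, 255)
```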

But I was inspired by Alan's idea: when we imagine the future, we should have confidence that what we envision will come true, even if progress is slow. I now had mathematically defined surfaces to work with, so I could render B-spline patches and do texture mapping. These images were a big step forward, and later research kept building on this chain of development. The algorithms we developed were inspired, and limited, by the machines we had. That may sound like complaining about the old days, how little memory there was and how slow the machines were, but such limits do not really constrain a discipline. The same is true in art: we know what we can do is limited, and the challenge is to go beyond the limit. When the challenge succeeds, the old limit moves outward, and the new challenge becomes breaking through the next one.

There was another project from my time at the University of Utah that I am proud of. The university hosted a seminar on the mathematics of surfaces, and I spent a lot of time thinking about curved surfaces. I knew there would be problems with using a pre-built network of patches: the topology of such a grid does not suit natural objects like a human hand. So, by reverse engineering, I turned the mathematics of B-splines into a set of geometric operations that solved the problem. These operations could be applied to arbitrary meshes as rules for recursive subdivision. I proved this with basic high-school geometry and thought it was a good idea. I showed it to a professor who specialized in curved patches; he barely read my 18-page handwritten proof and just said, "Ed, what the heck is this?" Hurt, I put the idea aside. A while later I showed it to Jim Clark; he saw its value, and we wrote a paper about it. Years later, Tony DeRose took the idea to the next level. We picked it up again, and over time it eventually became the subdivision surfaces widely used in today's film industry.
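
A one-dimensional analogue may help show what "geometric rules for recursive subdivision" means. The sketch below refines a closed control polygon with the standard cubic B-spline masks (edge point = average of the two endpoints, vertex point = (previous + 6*current + next) / 8); the Catmull-Clark scheme generalizes rules of this kind to the faces, edges, and vertices of an arbitrary mesh. This is an illustration, not the original derivation.

```python
def subdivide_closed(points):
    # One refinement step: each old vertex produces a smoothed "vertex point"
    # and a new "edge point" midway to its successor. Doubles the point count.
    n = len(points)
    refined = []
    for i in range(n):
        prev_pt, cur, nxt = points[i - 1], points[i], points[(i + 1) % n]
        vertex_pt = tuple((p + 6 * c + q) / 8 for p, c, q in zip(prev_pt, cur, nxt))
        edge_pt = tuple((c + q) / 2 for c, q in zip(cur, nxt))
        refined.extend([vertex_pt, edge_pt])
    return refined

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
curve = square
for _ in range(4):            # each pass smooths the polygon toward a rounded curve
    curve = subdivide_closed(curve)
print(len(curve), curve[:3])
```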

I believed, as far back as college, that this idea could be applied in practice. It was a goal I could work toward for a long time. I kept supporting this vision and tried to establish an exchange program between Disney and the university, so I went to Burbank. Visiting the Disney studio, it was wonderful to meet the filmmakers who had animated my childhood memories.

▲ Disney Studios headquarters in Burbank

Unfortunately, Disney was not interested in an exchange program; they just wanted to recruit people to help design new projects in Florida, and I was not interested in that. Disney was the one studio in the business that might have cared about computer graphics, and I found that they were not interested in it at all. So I decided the best place to pursue my dream was a university.

But there was a problem: at that time, computer graphics was considered to have little to do with computer science and was seen only as an interesting fringe discipline. Few university departments cared about computer graphics, and at the only two universities that did, Cornell and Ohio State, the graphics programs were not even in the computer science department.

When I interviewed for university positions, I tried to explain the enormous future potential of graphics, but no one listened, so I could not find a job in academia. At the end of 1974, I received a call from the head of NYIT. He knew nothing about technology, but he wanted to make animated films and believed computer graphics had a bright future. That was the good news. The bad news was that he thought computer scientists would replace artists. NYIT was willing to buy two full-color framebuffers at $130,000 each, and we got to work. Alvy Ray Smith was the second person to join the team after me. He had previously worked at the Xerox Palo Alto Research Center, but he was frustrated by their strange attitude toward color, so he left.

After I started, I wrote a 2D animation system and several 3D rendering systems. We also gathered like-minded people from all over. I was a novice manager, so I tried to replicate my experience at the University of Utah and bring people together by having everyone share and support the same vision and the same culture; it would be a long, step-by-step process. I did not think our research progress should be kept secret, so I decided the best approach was to join SIGGRAPH, recruit people smarter than me, and publish everything we did. That turned out to be one of the best decisions I ever made.

After five years at NYIT, we realized that the team's biggest weakness was the lack of filmmakers. Even though we had built useful tools, we could not succeed without people who could use them. So we began visiting film studios to show them our work, but none of them was interested. Then one movie changed everything: Star Wars. Its director, George Lucas, did not understand the technology, but he saw the special effects made at Industrial Light & Magic and firmly believed that computer technology would become an important part of filmmaking. At last there was a respected figure in the film industry willing to invest in us.

▲ The science-fiction film Star Wars

In 1979, I left NYIT to build a computer division at Lucasfilm. Lucas had attracted a lot of people interested in the business, and he was ambitious to change three main parts of filmmaking: visual effects, film editing, and digital sound. So we dug deeply into those three areas. His company is located north of San Francisco, which meant we could drive to Silicon Valley in an hour or fly to Hollywood in an hour. It was a good location: close enough to Silicon Valley and Hollywood, yet relatively removed from both.

Fortunately, George supported our decision to publish our results to the larger community. One of our competitors had bought a $10 million Cray-1 supercomputer, so we asked ourselves what it would take to make a high-quality film of the future and did some calculations. The answer was that we needed the computing power of about 100 Cray-1s, but could only afford 10. Following the projected curve of computing speed, we would need another 14 to 15 years. So we decided it was better to spend our time and resources on the many problems we could already see.

If we were going to set some crazy goals, we needed to be clear about what the problems were and what action to take. As it happened, when we finished Toy Story 15 years later, we were very close to our earlier estimates of the required computing power. Working with future change in mind is very important. To handle film-resolution images, we needed to design and build a system that could hold a whole full-resolution frame in memory. That required more parallelism than workstations offered, so the image computer came into being.

On the rendering side, Loren Carpenter had developed a new approach that could handle very high complexity. As Pat said, Rob Cook joined us after we made great progress on lighting and shadows. The three of us met at the whiteboard in my office to discuss what our big goals for the future should be.

At the time, the state of the art rendered about 40,000 polygons; by Pat's calculation, our goal was 80 million. I don't know why we didn't round up to 100 million; 80 million was simply what popped out of the calculation. This was not only about meeting Industrial Light & Magic's high standards, it was our own goal. Our pursuit of complexity, motion blur, and depth of field was crazy. We wanted to set a ridiculously high goal and chase it, to force ourselves to think in a completely different way. This led to a series of new ideas and changed rendering complexity, starting with Loren's architecture. Rodney Stock, the hardware designer working with us, suggested that we consider point sampling, similar to the dithering used in printing. Rob ran the experiments, trying a variety of Monte Carlo distributions of samples, and finally arrived at a good way to distribute them. Tom Porter then contributed a key idea: distribute the samples over time as well, which solved motion blur.
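
A toy illustration of distributing point samples over the shutter interval (not the actual Reyes implementation): each sample gets a jittered position and its own random time, and averaging them smears a moving object into motion blur. The scene and numbers here are made up.

```python
import random

# Toy stochastic sampling for motion blur: a bright bar slides across a dark
# background while the shutter is open; each pixel sample is jittered in
# space and assigned its own time, then the samples are averaged.
random.seed(1)

def shade(x, t):
    # A bar of brightness 1.0 whose left edge sits at position t (it slides
    # from 0.0 to 1.0 during the shutter); everything else is 0.1.
    return 1.0 if t <= x <= t + 0.3 else 0.1

def pixel_value(x0, width=0.05, samples=64):
    total = 0.0
    for i in range(samples):
        x = x0 + width * (i + random.random()) / samples  # jittered position
        t = random.random()                               # random shutter time
        total += shade(x, t)
    return total / samples

# A pixel the bar sweeps over ends up partly bright: that smear is motion blur.
print(round(pixel_value(0.5), 3))
```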

Rob then rewrote the system with a clean object-oriented architecture that let the software evolve as new techniques were developed. We knew that unless computing power increased at least a hundredfold this was not realistic, but we also knew that sooner or later our ideas would become reality. I conceived a short film to show what we were doing at Lucasfilm; that short film was The Adventures of André & Wally B. John Lasseter joined us, created the animated characters, and gave them the kind of life that only a really good animator can give. It was an exciting era, when highly creative people broke free from the shackles and boundaries of their respective fields.

However, the situation at Lucasfilm changed, and by the end of 1984 George Lucas found it necessary to sell the computer division. Eventually Steve Jobs bought it, even though George had told him that what we really wanted to do was animation, and Pixar was born.

So we found ourselves in the business of making and selling special-purpose computers, which I had never expected. No one, including Steve, had any experience making and selling high-end hardware, so we made a lot of mistakes. We hired people to do manufacturing and wrote software for our customers. Disney was one of those customers; they wanted us to help color hand-drawn cels.

To my surprise, I learned a lot when we started manufacturing. I used to think manufacturing was a rather mundane business, but I was wrong. Although we failed, those failures were the result of the many adjustments the company had to make, not of anything we had truly done wrong. We could not compete with the accelerating pace of Moore's Law, and the hidden momentum of that losing struggle forced a lot of change: it was time to get out of the hardware business and focus on software. Because we wanted to keep Disney's trust, but the software we had written for them ran only on our hardware, we sold the hardware business to another imaging company.

We signed another contract with Disney to port the software to SGI machines. Jim Clark, a graduate of the University of Utah, had founded SGI around the Geometry Engine, a predecessor of the GPU.

At that time, competition among workstation companies was not yet fierce. The rendering quality of their pictures was good, but they were all difficult to use. Jim came to me and suggested that we jointly design a rendering interface for the industry; in the end, a total of 19 companies took part. One decision I am proud of is that we invited Pat Hanrahan to be the design architect. Pat won everyone's trust: he is a listener and a great designer. Pat's design was very simple, and he built a sophisticated shading language on top of Robert Cook's concepts. He did all this work to make rendering easier for people to adopt, and that is the story of the RenderMan interface.

Movies, games, and GPUs

In the SIGGRAPH community, new research appears every year. For many years the holy grail was producing realistic images, but graphics research has since expanded into modeling, simulation, and complexity. How do I model and render the motion of water, cloth, or hair? How do I simulate natural phenomena? And, very importantly, how do I control a simulation to serve the needs of the story? These questions are fascinating.

Then the special effects industry began to merge with computer graphics. The effects industry, which began with Industrial Light & Magic, has no dogma about how things should be done. They don't care what tools they have; they only care about what ends up on the screen. If an idea is good, they will use it. The most crucial year was 1991, when Terminator 2 was released with a CG character in a leading role, Beauty and the Beast, which used 3D computer graphics, came out the same year, and Pixar teamed up with Disney to start making Toy Story.

The pace of progress accelerated as computers got faster. Jurassic Park, released in 1993, sent a signal to the film industry that everything was about to change, followed by Toy Story in 1995. From 1991 to 1995, the industry passed through a turning point in the acceptance of the new technology.

At the same time, the game industry began to rise. 3D games were still crude then, but they were already impressive. John Carmack pushed 3D games onto the PC.

Nvidia, founded in 1993, began making chips. They designed and manufactured a chip within six months, and that became part of Nvidia's culture: aim to release a new chip every six months, an unprecedented release cycle. AMD became a competitor in this rapid loop of improving GPU performance. At the same time, SIGGRAPH produced a great deal of research on algorithms and lighting simulation, exactly what the game industry wanted. Nvidia drew inspiration from everything available in an attempt to satisfy the graphics industry's boundless appetite for speed and realism. SIGGRAPH and the rest of academia and the entertainment industry could no longer afford to build specialized chips, but the game industry could.

GPUs went into workstations, giving graphics researchers faster machines for developing algorithms and producing more SIGGRAPH papers. There was a virtuous cycle between games, GPUs, and the SIGGRAPH community that kept driving computing performance upward and sustained Moore's Law for several more years. It was a virtuous cycle that no one could have orchestrated.

Between 2009 and 2012, GPUs began to show their usefulness in areas other than games. The matrix multiplications needed for simulation had been useful for many years, so people began to use them in scientific applications.

The idea of neural networks appeared more than 50 years ago; when it gradually became practical, it gave us deep learning and had a significant impact on many industries. Like the remarkable relationship between neural networks and deep learning, the cycle between GPUs, games, and academia produced completely unexpected surprises. Computer graphics was marginal at first, then rode an unprecedented roller coaster from a fringe discipline to an important pillar of many industries and of computer science, and this change will continue. It is hard to predict where we will end up, but we need to keep working hard.
