Channel: Reviews – NYC Production & Post News

A Review of iPi Soft’s Markerless Motion Capture System


Motion capture, or mocap, has earned its place in the modern animator’s toolkit. For many styles of animation, going with mocap instead of traditional keyframe animation saves time and cuts budgets. Until recently, however, only productions capable of investing many thousands of dollars in cameras and software from companies such as Polhemus and Vicon Systems could even consider the approach. But in the decades since mocap began as a tool for photogrammetric analysis in biomechanics research, advances in technology have steadily lowered the price of entry.

Harnessing Game Gear

Now, iPi Desktop Motion Capture software from Moscow-based iPi Soft, combined with Microsoft’s Kinect interface, has pushed mocap to a price point well under $1,000. But is this a usable combo for small producers considering a mocap-based project? When I first heard about iPi Soft’s markerless motion capture technology, I was intrigued. Used with two standard Microsoft Kinect motion sensors, here was a product that promised accurate motion capture for a fraction of the cost of traditional mocap systems, and promised to do it without markers. While these reflective objects affixed to an actor’s suit have traditionally been used to track motion, they raise the price of a setup. iPi Soft’s app and two Kinects, meanwhile, constitute a minimal package that requires no alternate gear such as inertial sensors, no teams of technicians, and not even dedicated studio space. Quite a claim. But how justified was it? After putting the system through its paces, I came to an interesting conclusion.

Motion Capture Chat

Before putting iPi Soft’s technology to the test, however, let’s first hear from the man behind this innovative software: Michael Nikonov, iPi Soft’s founder and chief technology architect. He was part of a team from Samsung that received a patent for an innovative markerless approach to motion capture this past year. According to Nikonov, markerless motion capture is not only able to compete with established approaches to mocap, but is poised to become the predominant system in the future, as its accuracy is already comparable to the standard marker-based systems employed by Hollywood. "I would compare older mocap solutions to antiquated mainframe computers," says Nikonov. "Now, the use of mocap for animation won't be limited by your budget but only by the amount of creativity you can offer."

Let’s take a brief look at marker-based motion capture technology, such as a setup from the well-established British company Vicon. Bonita, its latest entry-level product line, uses a combination of small IR (infrared) reflective markers on the actor’s suit along with cameras that employ IR LED lights. Multiple cameras ring the performance space so that, by employing real-time triangulation, the Vicon software can locate points on the actors within 3D space to reconstruct movement. Bonita systems start at $10,000.

Markerless systems such as iPi Soft’s don’t rely on fixed points such as reflective markers but use complex algorithms to parse the pixel depth information captured by the Kinect’s cameras. The Kinect’s camera system uses an infrared projector along with a webcam-like camera, as well as a processor, to track the movement of objects and individuals in three dimensions. This 3D scanner system employs technology from companies including PrimeSense, GestureTek and 3DV, which Microsoft bought in 2009. (Some of you might remember 3DV for the real-time 3D camera system that turned up in JVC's booth at NAB a few years ago.)
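To make the triangulation idea concrete: once the cameras are calibrated, a marker seen from two views can be located in 3D by least squares. This is my own illustration of the standard direct linear transform, not Vicon's or iPi Soft's actual code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two calibrated views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coordinates.
    Uses the direct linear transform (DLT) solved via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

A real multi-camera system stacks two such rows per camera, which is why adding cameras (as Nikonov notes) is the marker-based remedy for occlusion: more views mean more equations for each hidden marker.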
Nikonov says that the markerless approach is more reliable, especially when it comes to dealing with occlusion (the term for what happens when parts of an actor's body are hidden by other parts, such as when an arm goes behind a torso). The only way to fix occlusion problems in a marker-based system, according to Nikonov, is to use more cameras, with the result that some setups deploy up to sixteen or twenty devices. With all of those cameras, plus the complexity of implementing the sensors and the suits the actors must wear, you'd better count on having a few technicians around to make sure it all works. Naturally, all of this makes a marker-based mocap shoot very expensive. In fact, as Nikonov points out, "everything related to marker-based is expensive. Just the suit alone is more expensive than an iPi Soft system." Add in all the cameras, wires, the bother of applying the sensors to the actors and the space to set it all up in, and the price for marker-based motion capture soars even higher. In contrast, the iPi Soft markerless software costs $595 for the basic version ($995 for the standard version), while Kinects go for about $120 each. At the top of iPi Soft's home page you'll find the phrase "Motion Capture for the Masses," and I would agree: even independent animators or small studios can afford it. However, the question still remained: how well does it actually work? After my conversation with Michael Nikonov, it was time to find out for myself.

A pair of Kinects

As mentioned, iPi Soft's system utilizes two low-cost and readily available Kinect motion sensors. Manufactured by Microsoft, the Kinect motion sensor input device was originally developed for the Xbox 360 video game console and released in 2010. The device includes an infrared depth sensor that captures 3D data under just about any ambient light condition. Previous versions of iPi Soft's motion capture software supported only one Kinect (or multiple PlayStation Eye cameras). Now, with the ability to collect data from two Kinects, the resulting depth information is much more accurate. As iPi Soft notes on its site, the setup can even capture 360-degree turns by a character, an impressive achievement considering the system has to infer this from subtle differences in a character's position. Along with the Kinects came two active USB extension cables, which I used to connect the Kinects to my computer, in this case an HP Z800 workstation. This is an extremely powerful computer that's not a budget killer either (you can read my review here). I also used an Nvidia Quadro 5000, a good match for 2D and 3D work. Needless to say, this system provided more than enough power for the job.

Setting Up

Before you can start capturing motion, you need to set everything up properly. Naturally, this isn't as involved as with marker-based systems, but you do need to take a little care for it to work. The first step is to figure out where the performance will take place and position the Kinects accordingly. Obviously, the more space you have to move, the better off you are, since you must allow room for the action as well as for the sensors. If you have an unused office or room, that would be ideal. I set up in my home office, where I faced a bit of a tight squeeze, but there was sufficient space for a wide range of motion. This matters because one of the things I find compelling about iPi Soft's dual-Kinect motion capture system is that it is within the grasp of individual animators, both financially and technically. Studios that lease a large commercial space will have no problem, but in my opinion it is important that the system also work within more confined spaces. iPi Soft recommends a 10-foot by 10-foot space to work within, with a capture area of 7 feet by 7 feet; this is limited by the Kinect's sensor range, not by the software. For more details see this page. When it comes to positioning the Kinects, you've got two options: place them so they point at the staging area at between 60 and 90 degrees from each other, or position them at an angle of around 180 degrees. I chose the former (60 to 90 degrees), which fit better in my space. I also mounted each Kinect on its own tripod so that both were around waist height.

Calibration

After deciding where the action would take place and positioning the Kinects at the proper angle, the next step was calibration. This is necessary for the iPi Soft app to understand where each Kinect is located in relation to the other. Having two Kinects (as opposed to one) allows a wider coverage area for your motions as well as improved accuracy, but calibration is a must. Performing the calibration was straightforward. All you need is a relatively large flat plane; the online manual suggests a rectangular piece of plywood, cardboard or veneer around 30" x 40". I used a large painting that was hanging on my wall, which turned out to work fine. iPi Soft comes with two software modules: iPi Recorder and iPi Desktop Motion Capture. iPi Recorder is a free download that you use to record depth information from the Kinects; iPi Desktop Motion Capture requires a license, and we'll talk about that module later. For now, to record the calibration video (as well as the motion capture itself), you first use iPi Recorder. Upon launching, the app immediately recognized the Kinects and began to stream their output to my computer monitor. Depth sensor data is represented by bright, saturated colors in iPi Recorder: vivid reds, blues, violets and yellows. After pressing the record button and letting about two seconds go by, I entered the scene holding the calibration plane and, as instructed by the manual, recorded myself tilting it toward each camera, left to right, for a few seconds. It is recommended that during this process, as well as during the motion capture itself, you try to minimize the appearance of yellow pixels. They represent areas where the Kinect sensors are unable to determine depth (you don't need to eliminate every yellow pixel, just keep them to a minimum).
Interestingly, I found that shining an intense light on the set increased the number of yellow pixels, so there's no need to light your set brightly when doing your mocap. What follows is a video demonstrating what the calibration video looks like from both Kinects. Note that iPi Recorder captured the depth video at 30 fps; the playback rate here is 15 fps due to the screen capture utility I used. http://vimeo.com/35760821 After recording the calibration video, I opened it in the iPi Desktop Motion Capture module. Upon loading it, I was able to examine the depth information acquired by the Kinects. Viewed straight on, the brightly colored pixels in the depth video all appeared to lie on a flat X-Y plane. As I orbited the viewport, however, the pixels revealed that they also had Z depth. The mocap software had created a set of 3D pixels from the Kinect data whose collective positions described the physical makeup of my space, including the walls, floor and other features such as furniture and light fixtures. It also represented my body. I deduced that the reason I had to wait a few moments before entering the scene was so iPi Soft had the data needed to separate me from the rest of the scene. To calibrate the data from the two cameras, I first trimmed the frames to include only the tilting motion of the calibration plane and pressed the calibration button. iPi Soft analyzed the corners of the plane from the two different views over the range of frames and was then able to determine precisely where the cameras were located in space. This alignment is critical for recording the depth information accurately. http://vimeo.com/35787098 The final step was to save the calibration data as an XML file by pressing the Save Scene button under the Calibration tab.
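Those "3D pixels" are what computer vision calls a point cloud: each depth pixel is back-projected through the pinhole camera model into a 3D point. As a rough sketch of the idea (the intrinsic parameters here are placeholders; iPi Soft's internal processing isn't documented):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud
    using the pinhole camera model.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal displacement scales with depth
    y = (v - cy) * z / fy  # vertical displacement scales with depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

The per-camera clouds only line up into one coherent scene once the calibration step has established where each Kinect sits, which is why iPi Soft insists on loading the saved scene file before processing.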
From then on, you must load the calibration file every time you work on a motion capture segment by pressing the Load Scene button. If you don't, the system won't know where your Kinects are located and will be unable to process the data from the two depth sensors. If you accidentally move the Kinect motion sensors or reposition them, the calibration process must be done over again, so take care not to bump into them. That said, calibration doesn't take long, so it's no big deal to re-calibrate if their positions shift inadvertently. As long as the Kinects stay put, you can keep using the same saved XML calibration file for every motion sequence you capture.

Time for some action

After calibration, it was time to record some motion. I launched iPi Recorder, put on my dancing shoes and, after waiting a couple of seconds (just as with the calibration video), stepped into the staging area. Before you can begin acting out your motion, you first need to assume a T-pose. For those unfamiliar with the term, this means standing upright with your arms extended perpendicular to your body. After a moment or two of holding this position, you're ready to begin your motion. While I'm no Fred Astaire, I managed to pull off a little number that included a few kicks, some side-stepping and a couple of 360-degree full-body turns to see how well iPi Soft would follow. http://vimeo.com/35788133

The right track

After my dance routine, I opened the depth video I had recorded in iPi Recorder in iPi Desktop Motion Capture. Included in the application is a fully rigged human figure in, you guessed it, a T-pose. After locating a frame in the video where I stood in the T-pose, I maneuvered the included figure to more or less match mine using the move and scale tools. http://vimeo.com/35812895 Once the iPi Soft figure and my figure roughly occupied the same position, I hit the Refit Pose button, which makes the included figure conform more precisely to my starting T-pose. I then trimmed the frame range to include only the motion I wished to capture and hit the Track button. At that point, iPi Soft began to analyze each frame captured by the motion sensors and refit the 3D character and its rig to follow my motion. http://vimeo.com/35814064 This tracking step is what actually creates the motion capture data for the skeletal rig. When the tracking finished, I could watch the provided iPi character perform the same dance I had done on the staging area.

Post tracking

iPi Soft also provides post-processing tools to improve the track. Chief among them is jitter removal, which implements an advanced algorithm that removes any blips or glitches that might appear in the motion capture after tracking. Jitter removal does a remarkable job, but you also have the option of going back to selected areas of your performance and re-tracking if you wish. Also available is trajectory filtering, which further irons out the motion capture data nondestructively; you dial in a value that sharpens or smooths the motion depending on how high or low the number is.
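iPi Soft's filtering algorithms are proprietary, but the general idea behind nondestructive trajectory filtering can be sketched as a centered moving average over each joint's per-frame positions, where the window width plays the role of the smoothing dial:

```python
import numpy as np

def smooth_trajectory(positions, window=5):
    """Smooth a per-frame joint trajectory with a centered moving average.
    positions: (n_frames, 3) array of joint positions; window: odd width.
    A small window preserves sharp motion; a large one smooths more."""
    kernel = np.ones(window) / window
    pad = window // 2
    # Edge-pad so the output has the same number of frames as the input.
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode='edge')
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode='valid') for i in range(3)],
        axis=1,
    )
```

A single-frame glitch gets averaged across the window rather than removed outright, which is why a dedicated jitter-removal pass (detecting and re-solving outlier frames) is still worth having on top of plain filtering.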

Moment of Truth

After the tracking and post-processing steps were applied, it was time to see the results. I dragged the playback head to the beginning of the motion, hid the depth video and pressed play. I wasn't sure what to expect, since this was my first real attempt at using the software, and I thought I would need several tries before ending up with something usable. To my surprise, and sincere delight, the provided iPi Soft character went through the motions of the dance fluidly and convincingly. I was impressed. I won't say I was skeptical at first, but I knew how much traditional motion capture rigs cost compared to an iPi system and wondered how good it could really be. It was becoming clear that it was good. Really good. http://vimeo.com/35814610

Working with other software

Naturally, software like iPi's desktop motion capture system is meant to work in tandem with other 3D software, and once again it earns high marks. A quick way to apply the motion capture data to a character right inside iPi Soft is to click Import Target Character under the Export tab. When I did this with a DAE character (from Evolver) as a target, the motion capture data was immediately applied to it with virtually no other work on my part. http://vimeo.com/35853610 Next, rather than bringing a 3D model into the motion capture software, it was time to bring the motion capture into a full-featured 3D program. iPi Soft provides several options here and is compatible with the most popular software packages and formats, including Cinema 4D, Maya, MotionBuilder, 3ds Max, FBX, COLLADA, BVH, LightWave, Softimage, Poser, DAZ 3D, iClone, Blender and more. For this test, I chose Maxon's Cinema 4D R13, a highly advanced 3D package with robust character animation tools. Cinema 4D is also part of the production pipelines at major studios such as Sony Pictures Imageworks and Rhythm & Hues. For more on this very useful app, read my review of Cinema 4D R13 here. (Note: I also imported the data into Maya and MotionBuilder for testing, and it worked fine.) I exported the motion capture data as a BVH file from iPi Soft and, as expected, imported it into Cinema 4D with no problems at all. The rig immediately appeared in the viewport, with its complete hierarchy of joints in the project window. Glancing at the timeline, I could see keyframes for the position and rotation of each joint, one for every frame, and upon hitting the play button, the rig began to move just as it did in iPi Studio.
http://vimeo.com/35852103 Finally, I modeled a simple character in Cinema 4D, bound it to the rig and quickly weighted the vertices at the joints. Then I added a light and rendered the animation with Cinema 4D's new physical renderer, which, among other things, gave it some nice motion blur. Here is the resulting animation created with iPi Soft's markerless motion capture system and Cinema 4D: http://vimeo.com/35898135

Conclusion

I really enjoyed working with iPi Soft's markerless motion capture system. I think it is a great product and would definitely recommend it. Not only did it do what it claimed to, it did it well, with a minimum of fuss, and I found it production-ready with off-the-shelf usability. If you require a larger capture area for things like battle sequences or athletic movements, you may wish to consider iPi's Standard Edition, which works with up to six Sony PS Eye cameras and allows capture areas up to 20 feet by 20 feet. That's enough for most kinds of motion and still much less expensive than products from Vicon, PhaseSpace, and Animazoo. If you are a 3D animator who wants to try a mocap solution with minimum overhead, this might be all you need. If you are a production studio or game company, you have no excuse not to give it a try, especially at this price. From where I stand, I know of no other software quite like iPi Studio, and after my conversation with Michael Nikonov, I get the feeling we are going to see exciting new developments in the coming months. For example, Nikonov mentioned that iPi Soft may develop its own cameras or depth sensors; I would imagine they would offer more capability at a similarly low price. If you are serious about animating with motion capture, or are just getting into it, do yourself a favor and check out iPi Studio. You'll be glad you did.

Original article: A Review of iPi Soft’s Markerless Motion Capture System

©2013 NYC Production & Post News. All Rights Reserved.


Unrivaled Power: Review of the HP Z820 Workstation

The Z820 takes top position in HP's workstation line-up.

The new HP Z820 workstation is uniquely qualified for the special demands of high-end production and post, and putting this powerful workstation to the test over the past couple of weeks has only confirmed that conclusion. We’re giving the Z820 an NYCPPNews Editor's Choice award, a distinction we reserve for exceptional products that we feel will play a key role in the production and post industry. Let’s show you why. To begin, here's a video that summarizes the Z820's advanced features: http://vimeo.com/41870068 If you’re curious about the Z820's predecessor, read my review of the HP Z800 workstation, which previously topped HP’s Z-line of workstations. The Z820, like its predecessor, is an excellent choice for serious motion picture and video editing with programs such as Adobe’s Premiere Pro (an application that continues to gain popularity in the pro editing space) as well as Avid's Media Composer (well entrenched among elite motion picture editors). The most significant improvements in the Z820 include the latest Xeon E5-2600 processors, PCI Express 3.0 (integrated into the processor), USB 3.0, DDR3 memory and capacity enhancements, I/O slot improvements, and overall better performance. The release of the Z820 happens to coincide with Adobe's new Premiere Pro CS6, part of the Creative Suite, which garnered several important awards at NAB 2012. Avid's venerable Media Composer 6 was also released recently to critical acclaim. Sporting the latest-generation Intel processors and able to handle high-performing graphics cards, the Z820 is also well suited for 3D animation, where image rendering can severely tax lesser systems. 3D modeling and animation rely on the GPU for real-time rendering, and the GPU in my review Z820, the Nvidia Quadro 6000, was certainly up to the task. This is the current top-of-the-line card (and priced accordingly at around $4K).
The 6000 offers 6 GB of RAM and 448 CUDA cores, and claims to deliver an astounding 1.3 billion triangles per second. However, it is the CPU that renders the final frames. This involves heavy computation, as it takes a chunk of time to calculate the final lighting effects, textures, depth of field, motion blur, particles and the many other subtleties that go into outstanding 3D imagery for motion pictures. When working at 2K or 4K resolution, rendering out a sequence can easily take not just hours but days, depending on its length. In this regard, the dual 8-core Xeons (16 cores combined) running at 3.1 GHz with 20 MB of Level 3 cache can crunch through complex rendering jobs pretty quickly if you don’t have a render farm handy. It’s during the video editing process, of course, that filmmakers finally get a sense of what their finished project will look like, so it’s crucial that the editing workstation be powerful enough to play back a reasonable facsimile of the finished production from the NLE’s timeline. Today's workstations must therefore be capable of delivering multiple high-resolution video streams, music, sound effects and transitions without dropping frames. Fitted with an SSD RAID, the Z820 has no problem delivering smooth high-resolution video. And since Adobe’s Mercury Playback Engine (a capability built into many of the CS6 apps) is designed to accelerate with the Quadro 6000, the final speed-up is even better.

A closer look

Whether it’s punishing 3D rendering, video editing, compositing, audio production or color correction, the Z820 seems to breeze through it. Its handsome, rugged brushed-metal sides are both solid and aesthetically pleasing, but at a basic weight of 43 lb., the machine isn’t laptop light. For anyone doing high-end post who requires the utmost in power, however, weight doesn’t usually matter, and HP builds a set of sturdy handles into the top of the Z820 that make moving it around simple. One of the nicest things about the Z820 is the ease of flipping it open to expose the interior of the machine. Reflecting HP’s philosophy of tool-less maintenance, it is not only a cinch to open but easy to service as well. With no tools needed, it’s far easier to replace components such as the power supply, which pops out by simply pulling on a handle. Once inside, you’ll be impressed by the intelligent, thoughtful layout, which won’t have you skinning your knuckles as you might have years ago. After removing a couple of airflow covers, you’ll get to one of the key components of the Z820: the CPUs. My review machine sported two of Intel's latest eight-core E5-2600 Xeon processors running at 3.1 GHz. With 16 cores of processing power (32 virtual cores), things can potentially get hot, which is why the liquid cooling for the CPUs impressed me; it's something you used to find only in expensive custom systems or industrial-style servers. The E5 family of Xeon processors includes many significant improvements over prior generations, including the Sandy Bridge architecture, an improved core design, higher clock speeds and a doubling of peak floating-point performance. The E5-2600 Xeons can also offer a big performance boost with their 20 MB of L3 cache.

Drive into the bay

The Z820’s seven hard drive bays, which partly resemble a tidy stack of drawers, need only a light tug to remove or insert. The drives can, of course, be configured as a RAID 5 setup, a common choice of many video editors. While solid-state drives (SSDs) have been pricey, this is finally starting to change. The Z820 is the first workstation I have ever used that came with all SSDs, and the performance is magnificent. Be warned: once you get a taste of working this way, it’s hard to go back to spinning media. My review Z820 included three 300 GB Intel SSDs: one for the OS and applications, and the other two in a 600 GB RAID 0 configuration. Depending on how much source footage you have, you could easily fit several feature-length editing projects onto that. Granted, RAID 0 doesn’t offer any redundancy, but the simplicity of SSD architecture has me convinced that for my own work, at least, I’ll take the chance that they won’t fail, and I make sure I have everything backed up in several places. Working with the SSD RAID is a real pleasure. I am currently editing an hour-long movie in Premiere Pro CS6; the timeline has over a thousand edit points along with multiple tracks of sound effects and music. As mentioned above, the Nvidia Quadro 6000 video card greatly accelerates Premiere, since Adobe's Mercury Playback Engine takes advantage of it. Combine that with the speedy SSD RAID, and the result was virtually perfect real-time playback from my timeline. For video editors working with a client, this level of interactivity is an important part of a successful editing suite experience. With its SSDs and liquid-cooled CPUs, the HP Z820 is also whisper quiet, an important design benefit for a high-end workstation. If you’re sitting in a small edit suite for hours, you and your clients will appreciate this, though you might forget the machine is running.
As an up-to-date workstation, the HP Z820 features built-in USB 3.0 ports on the front and back of the chassis. Since the new standard is ten times the speed of USB 2.0, you just might be tempted to add some external USB 3.0 storage.

A short history lesson and editorial

The Z820, which takes over from the Z800 at the top of HP’s workstation line, is a logical choice for high-end production and post facilities, editing suites and color correction houses. I don’t know of any other Windows workstation, except pricey custom configurations, that offers what it does out of the box. These thoughts have caused me to reflect on a few important changes in our industry, as well as the historical currents that have brought them about. Of course, many people already use Windows-based workstations for high-end production and post, whether they are 3D animators, compositors, designers, video editors, game creators or music producers. However, some readers might still consider video editing, graphic design and audio production to be the domain of the Macintosh. If you happen to be one of those people, the next few paragraphs are for you. Over the past few years there have been major shifts in the postproduction industry, with major players moving into and out of the field. I’m not the first to wonder whether Apple has lost interest in the pro market. I've recently read several articles by former Mac users who have moved to HP workstations for their design and editing suites. This decision comes down to many factors, but the one that’s attracted the most attention to date is the perception that Apple's release of FCP X abandoned a whole ecosystem standardized around the prior version’s interface. Other factors eroding the Mac’s onetime top position have been the continued professionalization of Premiere Pro; Adobe’s push to make the entire Creative Suite fully cross-platform; Avid’s release of the redesigned Media Composer 6 as well as Pro Tools 10; and access to ever more powerful 3D software on the PC. It doesn’t hurt that ever more powerful Windows-based workstations deliver more while costing less than a tricked-out Mac Pro tower. Even so, some of you may still be unsure about switching to the Z820’s Windows-based platform.
If you're a longtime Mac user considering that switch, you might find this brief history lesson helpful. The notion of the Mac as a graphics and editing machine can initially be attributed to the influence of products from Adobe and Avid. In Adobe's case, access to products such as After Effects, PostScript, Photoshop and Illustrator (and now you can add Premiere Pro to that list) was a major reason people bought Macs in the first place. Similarly, Avid's Avid/1 (which started out as a Mac-only product) helped make the Mac the most popular editing platform, and Avid's Pro Tools made the Mac popular among audio and recording professionals. To a large extent, Apple owed its success in the graphics and video arena to Adobe and Avid, as well as to Data Translation’s Media 100 system. Meanwhile, the designers behind Adobe Premiere’s initial NLE left the company to create an edit system of their own liking. Steve Jobs eventually offered the team and its initial product a place in Cupertino so that he could have an NLE to compete with Premiere; thus Final Cut was born. Over the years, its ease of use, third-party support and Apple’s enthusiastic backing made it popular among editors. (Avid Media Composer, playing to a much smaller market, was and continues to be the favorite of the great majority of Hollywood and top-end broadcast editors.) Fast forward to today. The widespread impression is that Apple has become uninterested in the pro video market, causing many to speculate that the Mac Pro tower, a machine targeted squarely at pros, may be discontinued altogether. Once again, the Cupertino-based company didn’t have a booth at NAB this year. Why the loss of interest? It's probably a safe bet that Apple, the highest-valued company in the U.S., rakes in vastly more cash from the sale of smartphones, iPads, iPods and the regular downloading of millions of songs from iTunes.
Final Cut Pro X works fine on a MacBook Pro or Air, and some even tout it for what they perceive as advanced features that move well beyond the now-ubiquitous timeline interface. But while Apple might have developed an advanced editing product for individuals, it has turned away from a standardized, widely supported interface that allows pros to collaborate. In any case, for these reasons and others, many have come to see HP's Z Workstations not only as the highest-performing machines in their class, but as the ideal professional production, post and video editing machines, able to run the pro apps they need. If you do a little comparing of the HP Z820 to other systems out there, including the Mac Pro, I think you'll see why.

Benchmarks

I ran the Z820 through Cinebench, Maxon's popular 3D rendering benchmark. It received an impressive score of 25.41 on the CPU test and 87.78 on the GPU test. Here is a chart that compares the Z820 with the Z800 and a Mac Pro.

Final words

For those seeking the utmost power and performance in a workstation, I don't think you can do better than an HP Z820. Our decision to give it an NYCPP Editor's Choice award reflects that conclusion. For more information and the latest pricing, check out HP's website here.

Original article: Unrivaled Power: Review of the HP Z820 Workstation

©2013 NYC Production & Post News. All Rights Reserved.

Useful Tools: Our Review of Red Giant’s Magic Bullet Suite


Since its beginnings in 2002, Red Giant has become an increasingly important maker of visual effects software. The Beaverton, Oregon-based company publishes a range of apps for filmmakers, broadcasters and other moving-media creatives. Most relevant for our readers are the four popular suites of plug-ins that the company continues to publish, support and upgrade over the years. The Magic Bullet Suite, Trapcode Suite, Keying Suite and Effects Suite are versatile collections popular among motion graphics designers, compositors, visual effects artists and filmmakers. Together, they offer a range of creative possibilities, not only for After Effects users but also for those using NLEs including Premiere Pro, Avid and Final Cut. One or another of Red Giant's plug-ins turns up in any number of feature films, music videos and television shows. Recently "Plot Device", a short film funded by Red Giant that of course uses many of these tools, won the 16th Annual Webby Award for editing. You can watch that film here. We plan to cover all of Red Giant's suites in upcoming reviews. For now, we'll focus on an all-time favorite of many editors and compositors, Magic Bullet Suite, now in version 11.3. The initial version of Magic Bullet came from Stu Maschwitz, a longtime production and post pro who also writes an influential blog. After leaving ILM, Stu founded The Orphanage, a well-known effects house in San Francisco. While there he began developing the suite of plug-ins that eventually became Magic Bullet, which allows you to create "professional Hollywood-style results on an indie budget," as the website puts it.

Colorista II

High on the useful list is Colorista II, an advanced color correction and grading tool. Colorista allows precise and detailed color tweaking; it works in After Effects, Premiere Pro and Final Cut. It has a true Lift/Gamma/Gain correction system along with mask and color key isolation tools. With Colorista, not only can you fix color problems in your footage, but you can also achieve many unique, artistic looks. Colorista takes a three-stage approach to color grading: a Primary stage, a Secondary stage and a Master stage. That makes sense, since color correction is often done in multiple passes, sometimes by applying several instances of an effect to your footage. With Colorista's three-stage approach, you can pretty much do it all with a single instance of the plug-in. The first control you'll encounter in the Primary stage is Primary Exposure, useful for setting the overall brightness of the image. Next there's Primary Density, which behaves much like a gamma adjustment. You'll also find a highlight recovery control meant to restore lost detail in blown-out highlights. The Primary stage also contains a three-way color corrector with color wheels to adjust shadows, midtones and highlights. Each wheel has controls that let you shift the hue, saturation and luminance within each of these ranges. As part of the three-way color corrector, you'll find a rather distinctive HSL corrector. It contains two wheels, one controlling saturation and the other lightness. On the edge of the wheels are small handles, or dots of color, representing each hue. By grabbing the dot for the color you want to adjust, you can push it toward one of its neighbors to make it more like that hue. For example, you can push the yellows toward orange, or the blues toward purple. You can also drag a color dot in and out toward the center of the wheel, which will increase or decrease the saturation of just that color, or darken or brighten it.
I found this tool extremely useful for changing selective colors in a scene. For example, let's say your shot had a field of red poppies. With the HSL wheels you could increase the saturation or brighten up just the flowers without affecting anything else. I've not seen this method before, but I highly recommend it. There's also an Auto Balance color picker that lets you fix the white balance in your footage. By clicking on it and then on some pixels that should be nearly white, you can achieve a natural tone and remove the kind of unsightly casts that result from improper white balance. This too is an extremely useful tool I'll probably turn to often. At the bottom of the Primary stage is a Primary Mix setting that ranges from 0 to 100. By changing this value, you can blend the results of your adjustments with the original source in one place, without having to increase or decrease the controls individually. Colorista's second stage of color correction lets you selectively apply your correction to an area you define, such as a rectangular or elliptical mask, referred to here as a Power Mask. Naturally, you can position the masks anywhere and have complete control over their size, rotation and feathering. You can also restrict your color correction based on a color key; Colorista provides a built-in keying interface for this purpose. The Secondary stage also has its own three-way color wheels, identical to the ones found in the Primary stage. These color correctors affect only the keyed and masked areas; if no areas have been keyed or masked, they work over the entire frame. Also available in the Secondary stage are two more exposure and density controls. These function similarly to the ones in the Primary stage but, like the color wheels, are restricted to the masked and keyed areas; if none are present, they too revert to affecting the entire image. There's also a control called Pop in the Secondary stage.
A bit like sharpening, Pop can bring out the detail in your image when you slide it into the positive range. It can also soften things up when slid into the negative range, which can be useful, for example, for smoothing out wrinkles on the faces of older actors. The Secondary stage adjustments also include Saturation and Hue controls, which affect the entire dynamic range, and a Mix control that, like the Primary Mix, gradually blends the Secondary stage corrections in or out. Finally, Colorista II's last stage of correction controls is the Master stage. Like the first two stages, the Master stage contains exposure, density and mix controls. It also contains a three-way color corrector, as found in the other two stages, as well as the remarkable HSL corrector from the Primary stage. However, the Master stage also contains something unique that the other stages don't have: red, green and blue curves. Here you can adjust the RGB channels, either together as a whole or each one individually. The Master stage also lets you define a rectangular mask to restrict its corrections. Below the Master stage are three more controls. The Show Skin Overlay checkbox helps you achieve proper skin tones by overlaying a grid pattern wherever true skin tones should be; the more you correct your footage, the more clearly the grid appears in those areas. I find Colorista II to be a truly advanced color correction plug-in that offers sophisticated abilities right inside After Effects and Premiere. The HSL corrector, in particular, is unlike anything I have seen before and can be used to great effect to selectively change the values of certain colors in your footage. Colorista's multi-stage approach is also a great way to apply different stages of color correction to your project without having to resort to duplicated layers or multiple instances of the same effect.
Colorista is like getting a whole color correction app in a single plug-in.
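If you're curious what a Lift/Gamma/Gain correction and the Primary Mix blend actually do to pixel values, here's a rough Python sketch. It's my own simplified take on one common formulation of the math, not Red Giant's actual implementation:

```python
import numpy as np

def lift_gamma_gain(pixels, lift=0.0, gamma=1.0, gain=1.0, mix=1.0):
    """Apply a basic lift/gamma/gain correction to normalized RGB values.

    pixels: float array in the 0.0-1.0 range.
    lift raises the shadows, gamma bends the midtones, gain scales the
    highlights. mix blends the corrected result with the untouched source,
    like Colorista's Primary Mix slider scaled from 0-100 down to 0.0-1.0.
    """
    corrected = gain * np.power(np.clip(pixels + lift, 0.0, 1.0), 1.0 / gamma)
    corrected = np.clip(corrected, 0.0, 1.0)
    return mix * corrected + (1.0 - mix) * pixels
```

At mix=0 the source passes through untouched; at mix=1 the full correction applies, which is exactly why a single Mix slider can fade the whole stage in or out at once.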

Magic Bullet Looks

Another very compelling plug-in inside the suite is Magic Bullet Looks 2. This sophisticated tool allows for tremendous creative possibilities; with it you can create and apply many different kinds of looks to your footage. Or, if you'd rather not do it yourself, you can access a large library of professionally designed presets containing everything from practical color correction settings to more stylized and creative effects to help you tell your stories. Looks 2 also works in 32-bit float mode, so all calculations are done with the highest precision and fidelity. You can run Magic Bullet Looks either as a plug-in or as a standalone application. If you use it in a host application (such as an editing program), it opens in a separate window. However you choose to run it, once you're inside, you're greeted with a comprehensive and feature-packed interface that gives you the sense this is more than just a plug-in; it's a powerful color grading toolset in its own right. Moving your mouse to either side of the screen causes Looks' "drawers" to pop out. On the right is the tools drawer, which contains a plethora of useful tools organized into five sections: Subject, Matte, Lens, Camera and Post. Some of these tools are actually drawn from other Magic Bullet plug-ins such as Colorista or Cosmo, so you can think of Looks as the place that combines everything the Magic Bullet Suite has to offer. Pretty cool if you ask me. In Looks you'll find a range of cooling and warming filters that push your image colors in a coordinated way depending on your preference. You'll also find some nice vignetting tools with interactive control over the amount of feathering. As mentioned above, there are Colorista-style three-way color correctors, that HSL adjuster I really like, gradient exposure tools, curves, lens distortion, and film grain.
Really, the list just goes on and on. Magic Bullet Looks offers a very nice workflow and interface. As you drag tools from the tool drawer onto your footage, their icons appear in a horizontal stack at the bottom of your scene's preview area. Clicking each icon brings up the tool's controls in the Controls section on the right of the screen. This is where all the adjustments happen and where you fine-tune everything to get just the look you want. You can also rearrange the tool icons in the stack, which processes the image in a different order. For example, you may wish to see what a Crush looks like before a saturation adjustment, or vice versa; depending on the order, different looks result. You can also toggle a tool off and on in the stack to get a quick sense of its effect on the scene, or toggle the whole chain off and on. Looks also contains many useful scopes and graphs that are a big help when adjusting colors, such as RGB Parade, Slice Graph, Hue/Saturation, Hue/Lightness and Memory Colors. Once you find a look that you like, you can save it as a preset for future use. Besides the presets you create, there are many, many others ready for you to use out of the box or treat as a starting point to customize, an approach that can obviously save you time. The presets are organized into categories such as Cinematic, Diffusion and Light, Stock Emulation, Music Videos, Classic Popular TV, Tints and many others. I found it a lot of fun to try the different presets on my scene and watch as it assumed many different looks I didn't have to spend time creating. Magic Bullet Looks is a great environment to try out creative ideas. The abundance of tools and scopes allows for virtually unlimited experimentation and fine adjustment, and the extensive presets are a great way to see what's possible and get a head start on your own look.
In addition, you can download other collections of looks created by professional color graders and add them to your presets.
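The tool stack is essentially an ordered processing chain, which is why rearranging the icons changes the result. A tiny Python sketch (with made-up stand-in operations, not Red Giant's math) shows the idea:

```python
def apply_stack(value, stack):
    """Run a pixel value through an ordered chain of tools, Looks-style."""
    for tool in stack:
        value = tool(value)
    return value

# Hypothetical stand-ins for two tools in the stack:
crush = lambda v: max(v - 0.1, 0.0)   # "Crush": pull shadows down
boost = lambda v: min(v * 1.5, 1.0)   # a saturation/exposure-style boost

a = apply_stack(0.2, [crush, boost])  # crush first: (0.2 - 0.1) * 1.5 = 0.15
b = apply_stack(0.2, [boost, crush])  # boost first: 0.2 * 1.5 - 0.1 = 0.2
```

Same two tools, same source value, different results depending on order — exactly what you see when you drag icons around in the Looks stack.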

Denoiser II

Recently released, Magic Bullet Denoiser II is definitely a piece of software you want to have. You might want to remove a little noise from a scene shot in low light with a good camera, or perhaps you've received some noisy or grainy footage someone shot with junky old gear. Whatever the case may be, Denoiser II will come to your rescue. I know, because I was in this very predicament. I was working on a spot that included a scene with a doctor. We had hired a small crew to shoot the doctor in Rhode Island; they sent the footage to us on a hard drive. To our dismay, besides the fact that the lighting was flat, the footage was very noisy. Due to an impending deadline, we had no choice but to use it. I was a little unsure of how well Denoiser would work, but it turned out I had nothing to worry about. With practically no modification to the initial settings, the footage cleaned up gorgeously, and what was previously unacceptable became usable. Denoiser works best in 16-bit or 32-bit (float) color, so it's a good idea to set your project to one of those modes. After running a denoising pass, which analyzes your footage first, the plug-in should produce good results immediately without you having to do anything more. That's how it worked with my footage. There are, however, several other controls you might use. The Noise Reduction slider controls how much noise to remove, while Motion Estimation can help with footage that contains a lot of motion. There's also an Enhancement control which can pull out fine detail (sharpening) in your footage. Denoiser II also lets you choose which frame to sample the noise from with the Frame Sample control. This can be useful if your footage has a lot of texture that could be misinterpreted as noise. In most cases this isn't necessary, but by choosing a frame shot with the same camera and lighting conditions but without the noise-like texture, you can avoid problems Denoiser might have telling what's noise and what isn't.
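Conceptually, temporal denoising averages each pixel with its neighbors in adjacent frames, but only where those neighbors agree closely enough that the difference looks like noise rather than motion. Here's a bare-bones Python/NumPy sketch of that idea; Denoiser II's actual motion estimation is far more sophisticated:

```python
import numpy as np

def temporal_denoise(frames, sigma=0.05):
    """Blend each pixel with its temporal neighbors, but only where those
    neighbors stay within ~3 sigma of the middle (reference) frame — a crude
    stand-in for motion handling. Illustrative sketch only."""
    stack = np.stack(frames).astype(float)
    ref = stack[len(frames) // 2]                     # middle frame as reference
    weights = (np.abs(stack - ref) < 3 * sigma).astype(float)
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

Pixels that differ wildly between frames (motion) are excluded from the average, while small frame-to-frame jitter (noise) gets smoothed away.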

Mojo

Magic Bullet Mojo is a very cool plug-in that gives your footage a stylish Hollywood look. While its effects may be approximated with some of the other tools in the suite, such as Colorista or Looks, Mojo was tuned to get a certain look and it does that quickly and efficiently. Mojo achieves this look by cooling off the shadowy areas of the picture while warming up any skin tones. The result is a dramatic-looking image that has 'mojo', evoking the look of some of today's biggest blockbuster movies. You apply Mojo to your footage by increasing a slider that dials in the effect. It's fast, and works very well right out of the box. However, the plug-in has several other useful sliders which help adjust the look and provide many other creative variations. These sliders control the amount of tint and color balance in the image and let you do things like warm up the skin tones or give the image more of a bleached look. You can also add a skin tone grid overlay, common to many of the plug-ins in the Magic Bullet Suite, which helps you keep track of proper skin tones while you affect the rest of the frame. Mojo is cool.
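For the curious, the shadow-cooling/skin-warming idea can be approximated with a luminance-weighted tint. This Python/NumPy sketch is purely illustrative, with made-up tint values; it is not Red Giant's algorithm:

```python
import numpy as np

def mojo_style_grade(rgb, amount=0.5):
    """Push dark areas toward teal/blue and brighter areas toward orange,
    weighted by each pixel's luminance. Illustrative only, not Mojo's math."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b     # Rec. 709 luma weights
    cool = np.array([-0.05, 0.0, 0.08])             # hypothetical shadow tint
    warm = np.array([0.08, 0.02, -0.05])            # hypothetical highlight tint
    shift = (1.0 - luma)[..., None] * cool + luma[..., None] * warm
    return np.clip(rgb + amount * shift, 0.0, 1.0)
```

Dark pixels pick up mostly the cool tint, bright pixels mostly the warm one, producing that teal-and-orange blockbuster feel with a single `amount` dial, much like Mojo's main slider.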

Cosmo

The next tool in the suite is Magic Bullet Cosmo. This plug-in was made to smooth out skin tones and blemishes so that imperfect skin can look more perfect. It's useful for close-ups and can make people look younger and more glamorous. Think of it as a cosmetic plug-in; hence the name, Cosmo. At the top of the Cosmo plug-in is the Skin Color slider, which adjusts the overall color of the skin and lets you push it toward red or green as desired. The next few sliders control how much softening occurs, along with other parameters. If you work in fashion or do a lot of glamour shots, Cosmo should be in your toolbox.

Frames

Magic Bullet Frames consists of several useful utility plug-ins. Frames Plus converts interlaced video footage shot at 30 frames per second to 24p, the common motion picture frame rate. There are controls to help you de-interlace the footage, determine field order, detect motion and remove artifacts. Frames also contains Broadcast Spec, a filter that helps ensure the colors in your footage are within acceptable ranges for broadcast. Letterboxer is a simple but useful plug-in that does just what its name implies: it letterboxes footage via presets with common aspect ratios such as Academy, Super 16, Widescreen TV, Theatrical, Anamorphic and Ultra. Opticals is a tool that creates cross dissolves, fades and "burns" in a much more filmic way, as opposed to the linear way computers usually handle them. Magic Bullet Frames also includes a tool called Deartifacter that can help remove digital video artifacts resulting from compression.
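For background, the interlaced footage Frames Plus untangles usually comes from 3:2 pulldown, which spreads 4 film frames across 10 video fields. This little Python sketch shows one common cadence; inverse telecine works by detecting and discarding the duplicated fields:

```python
def pulldown_32(frames):
    """Expand 4 progressive film frames (A, B, C, D) into 10 fields using
    one common 3:2 cadence (A A A B B C C C D D), then pair the fields
    into the 5 interlaced video frames that reach tape or disk."""
    a, b, c, d = frames
    fields = [a, a, a, b, b, c, c, c, d, d]
    return [(fields[i], fields[i + 1]) for i in range(0, 10, 2)]
```

Notice that two of the resulting five frames mix fields from different film frames — those are the "dirty" frames that make unconverted telecined footage look juddery, and that tools like Frames Plus detect and rebuild.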

Instant HD

Instant HD is a plug-in that scales footage up to HD resolutions; SD footage can be effectively scaled up and still look very good. The plug-in contains many different preset HD sizes, but you can also dial in your own custom sizes. Instant HD also contains controls for sharpening, quality level and anti-aliasing, so obviously this is more than simple scaling. Following Red Giant's acquisition of Digital Anarchy, that company's Resizer plug-in has now been integrated into the Instant HD package. The filter offers great speed and similar results to Instant HD.
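To appreciate what a dedicated upscaler adds, it helps to see the baseline it improves on. This naive bilinear upscale in Python/NumPy is the kind of plain interpolation that tools like Instant HD beat with smarter filtering, anti-aliasing and sharpening:

```python
import numpy as np

def upscale_bilinear(img, new_h, new_w):
    """Naive bilinear upscale of a 2-D grayscale image: interpolate along
    rows, then along columns. A baseline sketch, not Instant HD's method."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    rows = np.array([np.interp(xs, np.arange(w), img[y]) for y in range(h)])
    out = np.array([np.interp(ys, np.arange(h), rows[:, x])
                    for x in range(new_w)]).T
    return out
```

Bilinear interpolation fills in-between pixels smoothly but softens edges as it goes, which is why upscalers layer sharpening and edge-aware tricks on top.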

MisFire

Also included in the Magic Bullet Suite is MisFire, a collection of film damage looks you can apply to your footage. These include scratches, vignettes, dust, splotches, flicker, grain, fading, gate weave and something they call Funk. It's useful if you want to give your footage that old celluloid look.

Conclusion

Red Giant's Magic Bullet Suite v11.3 contains sophisticated color correction tools in Colorista II, Looks and Mojo. Denoiser can save a shoot with its ability to take unsightly noise out of your footage and can do wonders for material you may have thought about discarding altogether. The other plug-ins in the suite are also a valuable addition to your toolbox. More than just a collection of useful plug-ins to dip into, v11.3 of the Magic Bullet Suite provides an entire color grading ecosystem that will expand your everyday work with After Effects and NLEs. It's well worth adding to your tool chest, or upgrading to if you already have it. Be sure to check out Red Giant's website for more information as well as many free training resources.

Original article: Useful Tools: Our Review of Red Giant’s Magic Bullet Suite

©2013 NYC Production & Post News. All Rights Reserved.

A Look at a Versatile Encoding Toolkit – Sorenson Squeeze 8.5


This isn't news: video is everywhere these days. If you're a creative, it's also more important than ever to control and manage that video throughout its life cycle. Whether you catch your favorite episodes on a TV or turn to computers, tablets and mobile phones, you'll find endless amounts of video for your entertainment, amusement or edification. Video's importance has grown too; YouTube is said to be the second most popular place, after web search engines, to look for information. As a content creative, you realize that straightforward, potent tools are crucial to the last stage of production: encoding video content for today's many devices and distribution portals. That's where Squeeze, a familiar toolset from video encoding mainstay Sorenson Media, shows that it's more relevant than ever. Version 8.5 of Squeeze continues to grow in capability to match the continuing growth of video. When combined with Sorenson 360, the cloud-based sharing, review and approval service for video professionals, things get even more interesting.

What's new?

First off, you will notice a speed increase in Squeeze 8.5. Thanks to a re-architecting of Squeeze's encoding engine, you'll benefit from optimization for major formats such as MP4, QuickTime (MOV), WebM and Matroska (MKV). The redesigned engine also takes advantage of parallel processing, distributing chunks of video across multiple CPUs. According to benchmark tests, the re-architected Squeeze 8.5 handles compression some 200 percent faster than the prior version, Squeeze 8. To garner even more throughput, Sorenson worked with Intel engineers to optimize Squeeze 8.5 for Intel's Quick Sync hardware encoding technology. This allows Squeeze to benefit from the improved architectures of Intel's second-generation (Sandy Bridge) and third-generation (Ivy Bridge) processors. The result? An effective doubling of the speed gained from Squeeze's re-architected video encoders. In my own tests, I was impressed by the results: encoding was snappy while quality held up. That might be because Squeeze also employs MainConcept's MP4 codec technology, highly regarded in the industry for its look and compression chops. The new version of Squeeze also delivers faster adaptive bitrate encoding, the technique of creating multiple versions of your encoded videos at different bitrates. On slower connections, the server offers up a lower-bitrate version of your video so things don't drag along, while faster connections benefit from the versions you've encoded at a higher bitrate. Controlling these jobs also got easier via a new slider bar that lets you throttle the amount of CPU power dedicated to encoding, allowing up to 100 percent of your system's resources to be devoted to the job.
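To make the adaptive bitrate idea concrete, here's a small Python sketch of the selection logic a player or server might use, with a hypothetical bitrate ladder. It illustrates the concept only; it is not Sorenson's implementation:

```python
def pick_rendition(renditions_kbps, measured_kbps, safety=0.8):
    """Choose the highest-bitrate rendition that fits within a safety
    margin of the measured bandwidth; fall back to the lowest otherwise."""
    usable = [r for r in sorted(renditions_kbps)
              if r <= measured_kbps * safety]
    return usable[-1] if usable else min(renditions_kbps)

# Hypothetical ladder of renditions Squeeze might produce, in kbps:
ladder = [400, 800, 1500, 3000, 6000]
choice = pick_rendition(ladder, 2500)  # 2500 * 0.8 = 2000, so picks 1500
```

The safety margin leaves headroom for bandwidth fluctuation, which is why a viewer on a 2.5 Mbps connection typically gets the 1.5 Mbps rendition rather than the 3 Mbps one.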

Sorenson 360

Being able to show your videos to other people, whether they be collaborators or clients, is important. You'll also want them to be able to comment on your videos and offer feedback. Sorenson Media's Sorenson 360, introduced some three years ago, is a professional online service that allows you to collaborate easily, and it's seamlessly integrated with Squeeze 8.5. New in Squeeze 8.5, users are treated to 5GB of free permanent storage within Sorenson 360. That means you can use Squeeze to encode your videos and immediately push them up into Sorenson 360 for review and approval. Naturally, this is much easier and more effective than the time-consuming process of burning DVDs, or even sending files over the web with file-sharing services like Dropbox or YouSendIt (those services aren't optimized for video sharing across multiple screens, nor do they provide the ability to leave comments about the video). I was very impressed by the thought that's been put into Sorenson 360. Of course you can publish videos of any length and password-protect them. You get full metrics, so you can track how many times your videos are being watched (as well as the duration viewed). Want to add closed captioning to your video? Simply upload a separate XML file. Sorenson 360 will also display the complete specs of your videos, including the codec used, the data rate, the frame size and the audio codec, info that can come in handy and that you might otherwise have used a separate app like Media Inspector to suss out. For a busy professional, the cloud-based app saves time by letting you create a Review and Approval request from within the website; there's no need to pop back over to your email client. Smartphones get into the act too, since Sorenson 360 can deliver an SMS text message. As a simple test, I made a contact for myself and sent a review and approval request with a click of a button.
Immediately, a message showed up in my mailbox with a link to the video on Sorenson 360. Clicking the link took me to a Review and Approval page containing a large version of my video as well as an area to write comments and note whether it was approved or should be revised.

Other features

There's much more to like in Sorenson Squeeze 8.5. For starters, I liked the design and intuitive feel of the user interface. Squeeze takes advantage of Nvidia's CUDA technology, so if you have a Quadro graphics card, you'll enjoy a considerable bump in encoding speed for H.264 compression. Don't like spending time on the considerable number of settings used for encoding? Squeeze 8.5 comes with many predefined intelligent presets you can use to quickly dial in the correct settings for your encode. Presets are conveniently arranged in tabs such as Broadcast, Devices, Disks, Editing, Transcoding, Favorites and Web. Of course, you can always define your own presets or get them from Sorenson's Preset Exchange, an online repository of presets uploaded by other Squeeze users. Once you select the preset you want, you simply drag and drop it onto a job module, where you queue up all the different sources, encodes and destinations for the current job. Also available in Squeeze 8.5 are a number of filter options you can apply during your encodes, including Gamma, HSL, Tint, telecine (add pulldown), inverse telecine, white balance, sharpen, timecode, de-interlace, contrast, watermarking and more. You'll find a number of audio filters as well. Squeeze's publishing tab stores all the various destinations for your encoding jobs and comes with a handful predefined. Besides setting up encode folders on your hard drive, you can choose among direct connections to Sorenson 360, Akamai, Amazon S3, Vimeo and YouTube. Sorenson Squeeze 8.5 also offers some basic editing, such as cropping and the setting of in and out points in your timeline. One handy feature creates a split-screen view that shows the source on one half while the other half displays the final result after applying different options and filters.

Conclusion

If you are a postproduction professional who does a lot of encoding to different formats and devices, Sorenson Squeeze 8.5 will make your life a lot easier. Having everything in one convenient, accessible place is a big time-saver, and the ability to upload your videos to different destinations right from within the program is very nice. The streamlined interface helps keep things running smoothly and efficiently. In addition, Sorenson 360 is a highly developed and useful cloud tool that has the potential to become your preferred way to share work with collaborators and clients. You can download a free trial version of Sorenson Squeeze 8.5 on Sorenson's website.

Original article: A Look at a Versatile Encoding Toolkit – Sorenson Squeeze 8.5

©2013 NYC Production & Post News. All Rights Reserved.

Virtual Sets the Easy Way: Joe Herman Reviews Intensikey


Running your own virtual studio just got a lot cheaper, not to mention easier. Virtual studios aren't new, of course. If you're a network or large production company, you can sink some cash into slick, real-time systems from Vizrt or Brainstorm, for example. Someone with a TriCaster rig, meanwhile, can get by for less via NewTek's $1,000 Virtual Set Editor. But Intensikey will only set you back $299 for the HD version. Introduced at this year's NAB, Intensikey certainly makes the whole process of setting up and creating virtual sets a lot easier. What exactly is a virtual set? In simple terms, it's a 3D environment complete with lights, materials and textures, props and cameras, created in a 3D modeling program. Your talent? Simply shoot them in front of a green screen. After keying out the green, you can place them inside the virtual set and create camera moves with Intensikey's virtual camera while they speak. If you're already familiar with virtual sets, you know how cost-effective and practical this solution is. True, you could create a virtual set from scratch in a 3D program, but then you'd have to learn a complicated 3D application (no easy feat), design a cool-looking set, texture it, light it, set up a camera and composite everything together with a compositing system. If you've mastered all of that, it will probably look pretty good; if you're not quite on that level, it's a different story. You could also hire a set designer, 3D animator and compositor to do it for you, but that's time-consuming, not to speak of expensive. Let's see how Intensikey's software-only approach works.

Virtual Sets Made Easy

Here's how a virtual set works using Intensikey. First you shoot your actor on a green screen. You don't need a large green screen stage, since you'll pretty much want him or her standing in one place (you could get away with the talent taking a step to the left or right). As far as the arms, torso and head are concerned, go ahead and move them around as much as you want. Regarding that stage: it's easiest just to rent a small green screen studio for the day. But if you're like me, someone who takes a do-it-yourself approach whenever possible, you could put one together in your office or suite. All you need is some green fabric, some lights and of course a camera. Check out YouTube and Vimeo for videos that go into more detail on how to do this. Once you've shot the actor's performance, it's time to start using Intensikey. To begin, you choose a set from among the many different ones Intensikey ships with. Considering how inexpensive this app is, I was impressed by the quality of the virtual set designs that come with the system. Some sets are suitable for presentations; others seem better for news-type shows. Intensikey also offers other virtual sets for sale on its website, so you can shop around for the exact look you want. Currently there are 80 sets available in addition to the ones that come with the program, with more promised from Virtualsetworks, Intensikey's sister company. If you're doing a lot of virtual set work, you might want to pick up the Virtual Set Pack Volume 1, with 20 sets for $499; individual sets are $99 each. Here's a nice touch: customers can also request customized sets directly from Intensikey.

Importing the Green Screen Footage

Once you select the set you want, it's time to bring in your green screen footage. No need to pre-key it since Intensikey has keying functionality built in. I used some demo green screen footage for my review that was shot horizontally, that is to say that the camera was rotated 90 degrees when shooting. This trick makes sure the camera’s maximum resolution was used. It’s easy to flip it to the correct position during the post process. The green screen footage is applied to a video plane, which, prior to import, looks like a big white screen in your scene. One nice feature: the video plane is not locked to a particular position, so you can move it around the set and place it where you want, even behind furniture such as desks or consoles. Intensikey provides useful keying tools for keying out your green screen footage without having to rely on another software package. Aside from a threshold level and range, other controls act to soften the matte to remove aliased edges. You might also use the shrink matte control (helps remove any green fringe) or the suppression control thats helps to remove green spill on the actor. Those experienced working with chroma keying may prefer to use another program for the keying such as After Effects or Silhouette. These give more control as they provide masks and other rotoscoping options. However, I didn't see a way to bring in pre-keyed or matted footage inside of Intensikey. This would be a useful feature, so I would be surprised if future versions do not include it. Until then, unless your your green screen footage has serious problems, Intensikey's built in keying tools will most likely do the job nicely. Aside from the keying tools, you can work a slider to set the volume level of the actor and a scale control to scale the footage up and down within the video plane. If you are looking for added realism, you can set the system to create shadows under your actor. 
Intensikey does a pretty nice job of creating a believable shadow, and you can tweak it further to control the shadow's opacity, X offset, Y offset, skew, blur and Y scale. Shadows also render in real time, so there's no performance penalty for using them.
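The keying controls described above boil down to a familiar recipe: measure how "green" each pixel is, threshold and soften that measurement into a matte, then suppress any remaining green spill. Here's a minimal sketch of that general technique in Python; it is illustrative only, not Intensikey's actual algorithm, and the parameter names are my own.

```python
# Minimal green-screen key: a threshold on "greenness" plus spill suppression.
# Illustrative only -- not Intensikey's actual algorithm. Pixels are (r, g, b)
# floats in 0..1; alpha is 1.0 where the actor is fully opaque.

def greenness(r, g, b):
    # How much the green channel dominates the other two.
    return g - max(r, b)

def key_pixel(r, g, b, threshold=0.15, softness=0.1):
    """Return (alpha, (r, g, b)) for one pixel."""
    k = greenness(r, g, b)
    if k <= threshold:
        alpha = 1.0                      # foreground: keep fully
    elif k >= threshold + softness:
        alpha = 0.0                      # background: key out entirely
    else:
        # Soft edge between threshold and threshold + softness.
        alpha = 1.0 - (k - threshold) / softness
    # Spill suppression: clamp green so it never exceeds the other channels.
    g = min(g, max(r, b))
    return alpha, (r, g, b)

# A skin-tone pixel survives; a pure green-screen pixel is keyed out.
print(key_pixel(0.8, 0.6, 0.5))   # -> (1.0, (0.8, 0.6, 0.5))
print(key_pixel(0.1, 0.9, 0.1))   # -> (0.0, (0.1, 0.1, 0.1))
```

The softness band is what produces the smooth, anti-aliased matte edges the review mentions; shrinking the matte would be an additional erosion step on the alpha.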

Importing Videos and Still Images

When working with a virtual set, no doubt you'll want to customize it to make it more relevant to what it is you're producing. For example, you might want to have a video playing on a screen somewhere in the set. Or you might want to have your logo affixed to the front of a console. Besides the video/image plane used for the actor, Intensikey's sets offer planes built into the set that are placed to seem like video screens or signs. Make them come alive by importing video files or still images. If it's a video, you can adjust the volume or loop the playback so it repeats.

Getting animated

Once you have imported the green screen footage of your actor, placed it where you want, and added videos and images in the background, you can liven things up even more by moving the camera around. For example, you can start with the camera far away and have it dolly in to the actor for a medium close-up while he is speaking, followed by a tracking move to the left or right. You're doing just what you might do on set: giving a presentation a more dynamic feel as the perspective shifts with the camera's motion. If you shot your footage in HD with a high-quality camera, you can even get very close to your actor without losing quality. Extreme close-ups may take a hit, however, unless you planned for them in advance by shooting your actor very close up. There's a timeline on the bottom of the interface similar to those found in programs such as Maya and Cinema 4D. Keyframing is straightforward as well: simply move the time marker to a new frame and move the camera, and your keyframes are created automatically.
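Under the hood, automatic keyframing like this comes down to interpolating camera parameters between the stored keys. Here's a hypothetical sketch using simple linear interpolation (real packages use spline curves for smoother eases, and the function and key names here are my own):

```python
# Hypothetical sketch of keyframe interpolation for a virtual camera.
# Keys map frame number -> value (here, just the dolly distance); values
# between keys are linearly interpolated. Real apps use spline curves.

def interpolate(keys, frame):
    """keys: sorted list of (frame, value) pairs."""
    frames = [f for f, _ in keys]
    if frame <= frames[0]:
        return keys[0][1]
    if frame >= frames[-1]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Dolly from 10 m away at frame 0 to 2 m away at frame 100.
keys = [(0, 10.0), (100, 2.0)]
print(interpolate(keys, 50))   # halfway through the move -> 6.0
```

The point is that two keyframes fully define the whole move; everything in between is computed, which is why dropping keys as you reposition the camera is all the animation work you need to do.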

Rendering

After you've set up your actors, customized the set and animated the camera, you'll want to render out the final video. Several presets let you choose from the most common HD and SD resolutions. File formats include MP4, AVI, MOV, WMV, F4V and MPEG-2, or you can output an image sequence. Working within Intensikey is fast, especially when moving around the set. Things happen in real time, even though the set might contain a fair bit of geometry, large amounts of texture and a number of lights, not to mention multiple video planes. I did wonder, however, what the final render would look like. While the real-time display gives you a close approximation of the rendered output, it is not final quality. Nor should it be: the app is designed to keep things moving at a fast clip while you are working, which keeps you involved. After rendering a final test to an HD file, however, it all looked good. Edges and lines as well as detailed textures were nicely anti-aliased and smooth, better than the real-time display, and good enough for prime time. The app also renders fast. I'm not sure what technology is under the hood, but there is no question that some advanced GPU handling is involved.

Conclusion

I'm impressed. Intensikey is not only an inexpensive and useful tool, but it is easy to learn. You should be able to get the hang of it in a day or two. It makes the whole process of setting up and using virtual sets really simple. Intensikey runs on Windows. For best real-time results, make sure you have one of the newer video cards from NVIDIA or AMD. More information, tutorial videos and pricing details can be found on the Intensikey website.

This review was conducted on an HP Z820 workstation, a great system for high-end post production.

Original article: Virtual Sets the Easy Way: Joe Herman Reviews Intensikey

©2013 NYC Production & Post News. All Rights Reserved.

Joe Herman Reviews Maya 2013


In its regular upgrades to its Entertainment Creation Suite family of products, Autodesk has offered various new tools and updates that continue to make the company's media and entertainment products among the leading apps in the market. The animation apps in the suite, Maya, 3ds Max and Softimage, are already proven, workaday 3D toolsets. When you take a look at everything included in Autodesk Entertainment Creation Suite Ultimate 2013, you'll be struck by the amount of effort that's gone into refining its feature set. It's effort well spent, delivering better interoperability while punching up the performance. Besides the three major apps we've mentioned, Creation Suite Ultimate 2013 also contains Mudbox (Autodesk's innovative sculpting and painting tool) as well as MotionBuilder (a potent skeletal animation and motion capture system). One more product to note in the package: Autodesk SketchBook Designer. This 2D painting, drawing and illustration application gives you professional sketching capabilities as well as vector tools. As you can imagine, any reviewer would be pressed to fit everything important about this latest, largest Creation Suite into one review. So we'll be concentrating on Maya here. Look for a follow-up review that takes a closer look at Mudbox, the 3D sculpting and painting tool, and more.

3D choices

Among the three industrial-strength 3D packages in the Autodesk Entertainment Creation Suite Ultimate, 3ds Max is a favorite of many 3D animators, character designers and game studios. Architects use it to create visualizations and concepts for their upcoming projects. Softimage, meanwhile, has a rich legacy as one of the first 3D animation programs used in Hollywood. Many enjoy its node-based ICE visual programming system, finding it much easier to work with than scripting. Maya, however, has the most penetration in high-end feature animation studios in Hollywood and beyond; you'll also find it in the credits of many big blockbuster films. If you're wondering which animation package you should use, the simple answer is: choose the one you like. If you are accustomed to using Max, stay with it. Like Softimage's approach? No reason to change. Your new job uses Maya? No brainer. In other words, it's up to you. For those just starting to learn 3D animation, find out where you want to work, see what apps they are using, and then learn that one. Out of the big three, however, it's a safe bet to start with the one that seems to pop up in the most job ads. And that's Maya.

Dynamics with nHair and Bullet

Let's start by taking a look at the new developments in dynamics in Maya. Maya's Nucleus dynamics simulation framework, or nDynamics, is a unified system that simulates and solves physics-based interactions between objects. Nucleus itself isn't new in Maya. To date, its modules included nCloth (used for drapery and clothes), nParticles (water, sparks, weather) and collision objects (geometry that works as colliders in physics simulations). Now add nHair to the list. This new dynamic hair system gives you another element that can interact and collide with all the other modules in the Nucleus ecosystem; nHair even offers self-collision between its hair strands. Animators can now create complex simulations in which all of these dynamic entities work together. nHair also has other advantages over Maya's previous hair system. Performance is improved, allowing you to work with much more hair, and you can use nConstraints to create limited interactions between Nucleus objects. A useful feature is nCaching, which stores the solved data from nHair simulations in an nCache file. This allows you to play back the simulation quickly without having to recalculate it every time. Speaking of dynamic simulations, Maya 2013 now includes the Maya Bullet physics simulation plug-in. Built on the popular open-source Bullet physics library, whose OpenCL acceleration AMD helped develop, Bullet is a comprehensive physics engine that allows you to create very realistic rigid and soft body dynamics simulations, even at very large scale, as well as cloth, rope, ragdoll skeletons and other deformable objects. Bullet is rapidly being adopted throughout the industry; Maxon's Cinema 4D incorporates it, for example. The addition of the OpenCL-accelerated Bullet physics engine gives Maya a powerful integrated dynamics system, very useful in a wide range of scenarios that require complex physical interactions without the need for manual keyframes.
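To see what a physics engine spares you from keyframing by hand, here is a toy rigid-body step in plain Python: gravity plus a bouncing floor contact. This is only a stand-in for the idea; Bullet's actual solver handles full 3D bodies, contacts, constraints and soft bodies at far greater fidelity.

```python
# Toy rigid-body simulation: one bouncing ball under gravity.
# An illustration of what a dynamics solver automates -- not Bullet's
# actual integrator.

GRAVITY = -9.8          # m/s^2
DT = 1.0 / 30.0         # one simulation step per frame at 30 fps
RESTITUTION = 0.5       # how bouncy the floor is

def step(y, vy):
    """Advance height y and velocity vy by one frame."""
    vy += GRAVITY * DT
    y += vy * DT
    if y < 0.0:                     # hit the floor
        y = 0.0
        vy = -vy * RESTITUTION      # bounce with energy loss
    return y, vy

y, vy = 2.0, 0.0                    # drop the ball from 2 m
for frame in range(90):             # three seconds of animation
    y, vy = step(y, vy)
print(round(y, 3))                  # height after three seconds of bouncing
```

Ninety frames of believable motion from three constants and one rule; an animator keyframing the same bounce would have to eyeball every arc by hand, which is exactly the tedium a solver like Bullet removes.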

New animation features

In Maya 2013, there's a new and powerful option that allows you to establish a live connection between Maya and MotionBuilder, Autodesk's skeletal animation environment that is specifically tuned for working with motion capture data. MotionBuilder allows you to keyframe over mocap data as well as keyframe from scratch; however, if you are not using mocap, many prefer to stay inside Maya to animate. MotionBuilder is used extensively in the industry to record motion capture directly from mocap hardware. With the new Live Connection window between MotionBuilder and Maya, you can set up a live streaming connection between the two apps using a HumanIK-defined character. By doing so, you can refine your animation or motion capture inside of MotionBuilder and immediately see how it looks inside your Maya scene without the need to bake it into Maya permanently. Maya 2013 also includes improved importing and exporting using the ATOM file format. (ATOM stands for Animation Transfer Object Model; don't confuse it with the RSS-style feed.) Suppose you have created animation for a character and later decide that you would like to use it on another character. Or perhaps you would like to share it with others to use in their productions. The ATOM format allows you to move constraints, animation layers, keyframes and set driven keys to others using a variety of apps. Maya's Trax Editor has also been upgraded to include a feature known as Clip Matching. You can think of the Trax Editor as a sort of non-linear editor for animation sequences. For example, suppose you have a sequence of a character skipping. Another sequence has him or her jumping up and down. You could lay these clips in the Trax Editor and offset their timing, speed them up or slow them down, similar to how you would handle it in an editing program. The challenge with combining clips is to make the transitions between them smooth and natural. That's where the new clip matching tools in Trax come in.
Clip Ghosts allow you to view the start and end frames of clips as skeletal wireframes. With the help of these visual cues, you can manually match clips with each other by translating and rotating them, then blend them together. Maya's Graph Editor adds a handy new function called the ReTime Tool. This lets you interactively adjust the timing of key movements in your animation by slipping and sliding its controls in the Graph Editor, which is a lot easier than moving individual keyframes one by one.

Alembic

Supporting the new open data initiative, Maya can now read, write and play back the Alembic open computer graphics interchange format, co-developed by Sony Pictures Imageworks and ILM. Alembic lets you share geometric data quickly and easily, regardless of the app you're using, including Maya, Houdini and RenderMan. The Alembic format is also extensible, so it can store other information as well. One of the most impressive aspects of Alembic is how efficiently it passes massive datasets around. During a demonstration at SIGGRAPH 2011, Rob Bredow, CTO of Sony Pictures Imageworks, showed how a complex 217-frame scene from the Smurfs movie consumed 87 GB of disk space when saved in OBJ format. By contrast, when saved as a single Alembic file, the same shot consumed a mere 173 MB. That's a 99.8 percent disk space savings. The reaction to Alembic by the 3D community has been very positive, as you can imagine. This open-source technology may in fact become the standard method to share 3D scenes going forward, regardless of the platform or application. The 2013 version of Maya also features better interoperability with 3ds Max 2013. Maya now includes a new 'Send To' 3ds Max command in the File menu. By invoking it, you can send things like geometry, animation, materials and textures from Maya to 3ds Max. This should make it a lot easier for users of the two applications to collaborate as well as share assets and model libraries. There's a new and improved Node Editor in Maya as well, with three levels of detail that help artists and TDs easily create, edit and debug node networks. The editor combines features from the old Hypergraph and Hypershade node editors into a unified and elegant interface. The new Node Editor allows you to pipe inputs and outputs from various nodes (such as objects, shaders and textures) into each other, letting you drive the attributes of one node with the outputs from another.
Creating new nodal functions, such as math operators or other specialized nodes, is simple. Just click on a blank area in the Node Editor and start typing the name of the node. No need to have the name memorized, since Maya will automatically list matching names as you type; simply select the name of the node you want to create when you see it. The new Node Editor seems well designed and robust, and a major improvement over previous versions. It also lets you create bookmarks as you lay out your nodes so you can easily return to previous graph layouts. Enhancements to Maya's Outliner include a new reference node which makes it easy to locate and identify all the file references in your scene. By using file references, you can continue to work with an object even as another member of your team works to refine it. Referenced files can also have animation attached to them. In Maya 2013, you can now edit animation curves from file references inside your Maya project. When you're done, you can export the updates to the offline file.
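The core idea of a node network, one node's output driving another node's attribute, is easy to model in miniature. Here's a toy dependency-graph sketch; the class and node names are my own, and this is not Maya's actual API (Maya's real graph is exposed through its maya.cmds and OpenMaya modules).

```python
# Toy dependency-graph sketch: one node's output drives another's input.
# Illustrates the concept behind Maya's Node Editor; not Maya's API.

class Node:
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value
        self.input = None          # upstream node driving this one

    def connect(self, upstream):
        self.input = upstream

    def evaluate(self):
        # Pull the value from upstream if connected.
        if self.input is not None:
            self.value = self.input.evaluate()
        return self.value

time_node = Node("time", value=24.0)
rotation = Node("sphere.rotateY")
rotation.connect(time_node)        # rotateY is now driven by time
print(rotation.evaluate())         # -> 24.0
```

Change the upstream node's value and every downstream node picks it up on the next evaluation, which is exactly what makes graph-based rigging and shading networks so flexible.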

Rigging

Maya has unified its character tools, letting you set up your character in multiple ways, all in the same window. Previously the HumanIK tools were independent of other character controls. Not anymore. Now HumanIK tools appear as tabs in a new consolidated controls window, which will make character rigging a lot simpler. To start setting up a character's rig, just select options from the Start pane. This is painless to do, since it guides you through the whole process of creating a new HumanIK skeleton or a control rig. Another very nice new rigging feature is heat map skinning, useful when binding a character's mesh to a skeleton; it requires less manual refinement than past methods. Previously, vertices would be weighted based on their proximity to adjacent bones. This would often lead to unwanted results when vertices were assigned to nearby but unrelated bones. For example, the vertices on the left shin might get weighted not only to the left shin bone but also to the right shin bone, simply because the two are close together. These sorts of problems were also common in other places such as shoulders and elbows. Of course such problems could be fixed during the weight painting process, but that is another time-consuming step. Heat map skinning gets you off to a much faster start and may get you most of the way there, thereby reducing the time spent painting weights.
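The proximity problem is easy to see in miniature. Below, a naive inverse-distance weighting (a stand-in for closest-distance binding, not Maya's actual code; the bone positions are invented for illustration) gives a left-shin vertex a noticeable weight from the right shin bone simply because the legs stand close together:

```python
# Naive inverse-distance skin weighting, sketched in 2D. Not Maya's
# implementation -- just an illustration of why proximity-based binding
# misbehaves, and why heat map skinning (which respects the mesh surface)
# is an improvement.

import math

def weights(vertex, bones):
    """Return normalized inverse-distance weights for one vertex."""
    inv = {name: 1.0 / math.dist(vertex, pos) for name, pos in bones.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

bones = {
    "left_shin":  (0.0, 0.5),   # the legs stand close together,
    "right_shin": (0.3, 0.5),   # only 0.3 units apart
}
vertex_on_left_shin = (0.05, 0.5)

w = weights(vertex_on_left_shin, bones)
print(w)   # the right shin "steals" about 17% of the weight
```

A heat-based scheme would diffuse influence along the mesh surface instead of jumping straight across the gap between the legs, so the right shin bone would get essentially zero weight here without any manual painting.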

Modeling

Modeling enhancements in Maya 2013 include an improved Extrude tool, which now allows more precision by providing thickness, offset and division values. The Sculpt Geometry tool now enables you to create more pronounced deformations with a new brush strength slider and a pinch algorithm that provides smoother results than before.

Other enhancements

Viewport 2.0 now supports HumanIK, joints, motion paths, ghosting and an improved playblast, which now supports H.264 QuickTime output. In addition, there's better support for audio and multi-track audio. Two new multi-render passes are in the package: a UV pass and a world position pass. By converting UV values to red and green values, the UV pass allows you to replace textures in 3D renderings in post without having to track them in place. The world position pass, meanwhile, converts X, Y and Z position coordinates to RGB values, allowing you to relight your scenes during compositing. Even Maya's Help system got a bit of attention: now the search function pops up results not only from Maya's Help documentation but also from other sources such as Autodesk's YouTube channels and forums.
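Both passes described above are simple coordinate-to-color encodings. Here's a sketch of the idea; the normalization range for world positions is an assumption for illustration, since Maya's exact encoding isn't specified here.

```python
# Sketch of encoding render-pass data as colors. The world-position
# normalization range is an assumption for illustration; Maya's exact
# encoding may differ.

def uv_pass(u, v):
    """UV pass: U -> red, V -> green, blue unused. UVs are already 0..1."""
    return (u, v, 0.0)

def world_position_pass(x, y, z, bounds=10.0):
    """World position pass: map XYZ in [-bounds, bounds] to RGB in [0, 1]."""
    def norm(c):
        return (c + bounds) / (2.0 * bounds)
    return (norm(x), norm(y), norm(z))

print(uv_pass(0.25, 0.75))                    # -> (0.25, 0.75, 0.0)
print(world_position_pass(0.0, 5.0, -10.0))   # -> (0.5, 0.75, 0.0)
```

In the compositing app, the mapping runs in reverse: read a pixel's color, decode it back into a UV or world coordinate, and you can re-texture or relight that pixel without any 3D re-render.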

Conclusion

The Autodesk Entertainment Creation Suite 2013 contains many new enhancements. We've touched on some that will make your work easier and open up new creative possibilities. We could only discuss Maya here; otherwise this review would run on for many more thousands of words. But really, every app in the suite has its own list of new and worthwhile features. From a company of Autodesk's reach, you wouldn't expect anything less. With the improvements to all three of Autodesk's professional 3D animation programs, as well as all the good things in the latest Mudbox, MotionBuilder and SketchBook, Autodesk's Entertainment Creation Suite 2013 Ultimate is a notable and worthwhile upgrade to a key tool of our industry. The price? A new copy of the Autodesk Entertainment Creation Suite 2013 Ultimate will set you back $7995. (To make the tab a little less painful, on its website Autodesk helpfully adds that this is a "$15,220 value.") As its name implies, Ultimate has the most options, tossing in all of the apps in the other Creation Suites. Maybe you don't need all of that, so check out the Standard and Premium versions too. Visit Autodesk's website for more information, and don't miss the free 30-day trial download of Autodesk Entertainment Creation Suite 2013 to try it out yourself.

Original article: Joe Herman Reviews Maya 2013


A Look at the GoPro Hero 2 HD Camera

The GoPro Hero 2 is a versatile camera capable of shooting HD video in a small form factor, perfect for capturing intense action, sports footage or tricky stunts when you don't want to risk damaging your high-end camera rig. This wearable camera includes some rigging options, and there are plenty of other third-party options you can employ. Key points include ultra-wide 170-degree video at 1080p, 11-megapixel stills, time-lapse, and burst shooting at 10fps. Slow down action by 4X with its 120fps mode at 848x480 pixels. HDMI and external microphone ports add to the camera's flexibility, but probably the most intriguing addition over the previous two models is the Wi-Fi BacPac accessory. Along with the Wi-Fi remote, you can now control up to 50 HD Hero 2 cameras via a smartphone. You can even view their feeds on a smartphone or tablet screen. Durability is a given, not surprising for a camera that came to fame for its use by skateboarders, skiers and extreme sports enthusiasts. The sealed housing can be submerged to a depth of 60 meters, so using it while scuba diving and snorkeling is also an option. The camera offers a minimal interface with only two buttons: a mode button on the front and a shutter button on the top. The camera powers up via a prolonged press of the mode button; that same button allows you to scroll through the various shooting modes and options. Like its predecessor, the HD Hero 2 is fully automatic for exposures. Reports say it's much better at handling quick changes in lighting conditions, without the lag of previous models. The HD Hero 2 shoots 1080p with a 170-degree field of view; it also offers two narrower fields of view at 1080p, 127 or 90 degrees, although these seem to have increased noise. 1080p footage is captured at 30fps and encoded with H.264 into an MP4 file. Other configurations include 960p (4:3) at 30 or 48fps, 720p at 30 or 60fps, or WVGA (848x480) at 60 or, as mentioned earlier, at 120fps.
The camera features a time-lapse mode. Images can be taken every 0.5, 1, 2, 10, 30 or 60 seconds; you can then stitch them together in QuickTime Pro or an NLE. There's no way to see what you're shooting with the standard camera, but an optional LCD BacPac accessory bolts to the back. You can also set up a 3D system by using a housing that fits two of the cameras side by side. Granted, this isn't a system that allows the requisite control of a standard pro setup, but it can still be useful in its way to explore 3D in an action setting. For those planning to do 3D shooting, you will want to download the free GoPro CineForm Studio software, which takes footage captured with the 3D HERO system and converts it into viewable 3D files. It then allows you to export the files and watch them on your computer or 3D TV, as well as on 3D-capable websites like YouTube. Another nice thing the free version of GoPro CineForm Studio lets you do is stitch together a time-lapse instead of (or in addition to) using QuickTime Pro and an NLE. We'll take a closer look at the GoPro camera when the new Protune firmware upgrade ships sometime this fall. Developed in partnership with Technicolor and announced at NAB 2012, the effort will embed Technicolor's flat CineStyle color profile into the Hero 2. Key features will include a 24fps frame rate, 35Mbps data rate, neutral color profile, log curve encoding for more detail in shadows and highlights, and reduced sharpening and noise reduction. A great thing about the Protune upgrade is that it will be absolutely free. When it ships, the GoPro Hero 2 will become one of the best $300 action capture solutions available. Check out GoPro's Tim Bucklin discussing Protune in this video we created at NAB 2012. http://vimeo.com/45217628
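The slow-motion and time-lapse figures above follow directly from frame-rate arithmetic:

```python
# Frame-rate arithmetic behind the Hero 2's slow-motion and time-lapse modes.

# 120 fps footage conformed to a 30 fps timeline plays at quarter speed:
slowdown = 120 / 30
print(slowdown)                  # -> 4.0, the "4X" slow motion quoted above

# A time-lapse shooting one frame every 2 seconds for an hour yields:
frames = 3600 / 2                # 1800 stills
playback_seconds = frames / 30   # stitched into a 30 fps sequence
print(playback_seconds)          # -> 60.0 seconds of finished video
```

The same arithmetic helps you plan an interval: pick the real-world duration you want to compress, divide by the target clip length times the playback frame rate, and that quotient is the shooting interval to set.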

Original article: A Look at the GoPro Hero 2 HD Camera


A Review of Adobe Premiere Pro CS6


Recently, I reviewed Adobe Premiere Pro Creative Suite 6 (CS6) for the summer issue of CineMontage magazine, the official publication of the Motion Picture Editors Guild. Aside from the print version, the Guild publishes the contents of CineMontage on its website, so even if you don't receive the magazine, you can read the review by following this link. In the article, I touch upon Adobe's Creative Cloud distribution strategy, a radically new way Adobe has chosen to distribute its software that does away with the era of cardboard, DVDs and thick manuals. Okay, so maybe Apple is doing that already. But Adobe Creative Cloud also features a "pay as you go" subscription model, allowing people to get the software now without having to shell out a fat amount of cash first. I also talk a little about some of the other enhanced apps in CS6 (such as After Effects and SpeedGrade) before delving into the new features of Premiere Pro itself. We like a host of the new features, including editing with the keyboard, the improved user interface, selectable trim points, uninterrupted playback, the new audio mixer and more. Throughout the industry, many are realizing the power of Premiere Pro as a workaday NLE. This review explains some of the reasons why.

Original article: A Review of Adobe Premiere Pro CS6



Blackmagic Does It Right: DaVinci Resolve 9 Review


Color me impressed. With the release of DaVinci Resolve 9, Blackmagic Design not only delivers a major upgrade to a color-grading workhorse, but also, with some heavy lifting from the latest series of GPUs, offers up a real-time 4K dailies machine. Resolve 9's dramatically redesigned user interface provides a much friendlier and more intuitive way of working, removing much of the confusion and clutter found in previous versions. The result? A more up-to-date app that is a lot easier to use, yet deep enough for the most challenging jobs. No surprise. Blackmagic Design (BMD) has become known for its user-friendly approach of turning high-end gear into slick products that look good, are accessible and are affordable.

A Fine Manual

Before we dive into the most important new features in Resolve 9, let's take a moment to talk about something many other companies seem to forget about these days: a good manual. Looking as if it's been completely rewritten, the manual now tops out at a thorough 600 pages, full of tasteful graphic design and loads of helpful illustrations. Why all that work to rewrite? It just might be because this latest version of the program will go out to many newbies too, since it's bundled with BMD's new Cinema Camera. (The camera outputs in ProRes and RAW, the latter requiring extensive color correction.) The new manual thus doesn't take it for granted that the reader is a color suite veteran, but starts out with an intro to color correction's fundamental concepts. An in-depth tutorial covers a remarkable amount of ground on how to work with Resolve, including how to import a project, use Resolve's core grading tools and then render it all out. Hundreds of concisely written pages of in-depth reference about every facet of Resolve follow. Nice job.

User Interface

First off, it's much easier to get started and get working in Resolve. A much-simplified login process lets you configure users without a complicated setup window, and you won't need a database anymore to get going. In previous versions of Resolve, much of its functionality was scattered throughout the application, with settings that could be confusing to find, or simply out of place. In Version 9, this has been cleared up with a simpler and more logical interface that is workflow driven. How? Resolve's interface is now organized into five "pages" or sections, accessible by clicking on five icons towards the bottom of the screen: the Media page, Conform page, Color page, Gallery page and Delivery page.

Media Page

The Media Page is where you import media into Resolve, whether it’s coming from a hard drive or from an SSD at a shoot. Note that Resolve 9 now automatically recognizes any devices connected to your system. Previously, adding devices to your laptop or workstation meant you had to restart the program. Imported clips now feature ‘hover scrubbing’ over their icons, a feature found throughout Resolve, so now you know exactly what’s in each one. Meanwhile an audio meter displays an impressive 16 tracks of audio per clip. In Resolve 9, your audio tracks are now carried through the whole process until delivery. There is improved support for handling a wider range of camera formats. You’ll find preset RAW settings for RED, ARRIRAW, and Sony’s F65. Of course, there are presets for the Blackmagic Cinema Camera, but also for less common cameras like the Phantom, GoPro, and Canon C300. A nice touch: Resolve 9 will automatically support any alpha channels in imported media, saving the need to build an alpha composite.

The Conform Page

The Conform Page is next. This is where you take in edited timelines from a variety of NLEs including Avid Media Composer, Adobe Premiere Pro and Apple Final Cut that use such formats as EDL, AAF and XML. The Conform page also displays all the different timelines in the current project. Here’s a feature you’re sure to like: Resolve 9 now allows you to import timelines with mixed frame rates. Prior versions only supported one frame rate per project. You can, however, render everything out at a constant frame rate later if desired.

The Color Page

The Color page is where most of the revisions in Resolve 9 have been made. And that makes sense, since this is where you'll spend most of your time on color correction and image adjustment. In previous versions, many of the functions found on the Color page were located in different parts of the app. Now, everything you need for color correcting is found in a variety of palettes on this page. Aside from the three-way lift, gamma and gain controls that are standard in many color grading applications, Resolve 9 has a new way to adjust color in the color wheels palette. Called log mode, this approach gives you more control of specific tonal ranges (shadow, midtone, highlight and offset), allowing you to do things like push the highlighted areas towards yellow, for example, while pushing the shadows towards blue. The offset control allows you to offset all the colors at once. You will also appreciate the new ability to pop out and enlarge the color curves palette while adjusting it. Previously the color channel curves were a little on the small side; by making them bigger, it's easier to make finer and more precise adjustments. When you're done, simply shrink them back down. Another useful new feature in Resolve 9 is that you can now name your color correction nodes. Previously, the only way to tell what a node was doing was by opening it up and seeing what was in there. Now you can give it a descriptive name like "hair correction" or "John's suggestion" so you can understand what effect it's having. Version 9 of Resolve, of course, also offers a variety of methods to measure color values. Waveform, parade, vectorscope and histogram tools can be viewed together, or each one can be enlarged and viewed separately if you like to keep it simple. On the Color page, you'll also find the tracking tools.
Resolve has always had a phenomenal tracker, allowing you to do things like track a mask (otherwise known as a Power Window) around a character's face. Now, however, there is a tracker graph, which gives you access to the tracking data, so you can nudge a track here or there if you really need to. By the way, Dynamics are now referred to as Keyframes, again a less proprietary, more industry-standard term that in the end is less confusing to color correctors who haven't spent time with Resolve. One useful enhancement new in Resolve 9 is the ability to flag clips and filter the timeline. Here's an example of why that matters: Suppose you are working on an epic movie with hundreds and hundreds of clips. That's a lot to keep track of, and in past versions you needed to keep track of everything manually. By having the ability to flag clips with colors, you make the process of subsequently finding and isolating clips much easier. For example, you might use red to flag all the clips that will need to be rendered out. Need to track the visual effects shots that you are still waiting to receive from your VFX house? Flag those in blue. Want to flag all the dancing scenes? You could do that too. Later, you might filter the timeline to show only the clips that are flagged, depending on what you want to work on at that point, or send to render. You can also filter the timeline to show things like graded clips only, ungraded clips or clips that have been modified within a fixed period of time. Resolve 9 adds another nice new feature that makes it easier to find a clip in your timeline: Lightbox view. Traditionally, the timeline comes up as a horizontal row of thumbnails, each of which represents a separate clip. Finding a specific clip was difficult if you had a timeline with hundreds of shots. Now, with Lightbox view, the timeline expands to fill the entire screen with clip thumbnails filling multiple rows and columns.
A simple enough idea, but this makes it a lot easier to quickly find what you are looking for.
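The three-way lift, gamma and gain controls mentioned earlier follow a well-known formula, related to the ASC CDL's slope/offset/power transform. Here's a sketch of the conventional textbook formulation for a single channel; this is the generic math, not necessarily Resolve's internal implementation.

```python
# Conventional lift/gamma/gain applied to one channel, values in 0..1.
# The generic textbook formulation -- not Resolve's internal math.

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    # Gain scales the signal, lift raises the blacks,
    # and gamma bends the midtones.
    y = x * gain + lift
    y = min(max(y, 0.0), 1.0)          # clamp before the power function
    return y ** (1.0 / gamma)

# Raising gamma brightens midtones while leaving black and white alone:
print(lift_gamma_gain(0.0, gamma=1.2))         # -> 0.0
print(lift_gamma_gain(1.0, gamma=1.2))         # -> 1.0
print(lift_gamma_gain(0.5, gamma=1.2) > 0.5)   # midtones lifted -> True
```

Log mode's shadow/midtone/highlight controls differ in that each operates on a restricted tonal band rather than the whole curve, which is why pushing highlights toward yellow need not drag the shadows along with it.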

The Gallery Page

Use the Gallery page to do just that: create a gallery that allows you to manage all of your looks and still frames in one place. When you store a still from a sequence, the system also saves all the various grades associated with it. The Gallery page also contains the DaVinci Resolve Looks collection. All in all, it's a good way to manage and share your stills and grades among multiple projects. BMD again moves towards increased usability and a faster learning curve for Resolve 9 users via its collection of presets, which you can customize as you like. Some presets create the latest trendy looks from features and commercials, while others emulate specific film stocks. Experiment with them to develop a new look or use them as a jumping-off point for your own grades. Opening up once complicated operations to new users is just part of Blackmagic Design's DNA.

Delivery Page

The Delivery Page is the final page, where you set up your renders. Rendering out might be part of a round trip, for handing over dailies to the editor, or for mastering the entire project. The setup of the new Delivery Page is a lot easier to comprehend than in earlier versions. Want to set up multiple jobs, each capable of having multiple delivery formats? Not a problem. For example, you might designate specific, separate deliverables for your editor, for playout on a mobile device or tablet, or for posting on Vimeo. There’s also a new checkbox when rendering that allows you to move among different levels of debayering of the RAW footage. How might you use that? For example, depending upon the system you are working on, you might want to debayer your footage to a lower level in order to keep things moving along. When it’s time for the final render, change it to the highest quality. By doing this you save time while color grading, yet still maintain the highest-quality results. For very tight control, each clip can even have its own level of debayering.
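The grade-fast, render-full debayer trade-off boils down to a simple lookup. The level names and resolution fractions below are illustrative assumptions, not Resolve's actual settings.

```python
# Sketch of choosing a debayer quality per stage, with an optional
# per-clip override. Names and fractions are hypothetical.
DEBAYER_LEVELS = {"quarter": 0.25, "half": 0.5, "full": 1.0}

def debayer_level(stage, clip_override=None):
    """Pick a debayer quality: lower while grading for interactivity,
    full resolution when mastering. A clip may override the default."""
    if clip_override is not None:
        return DEBAYER_LEVELS[clip_override]
    return DEBAYER_LEVELS["half" if stage == "grading" else "full"]

# While grading, work at reduced resolution to keep things responsive:
preview = debayer_level("grading")
```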

Concluding Thoughts

Like I said at the beginning, I’m impressed. Version 9 of Blackmagic Design’s DaVinci Resolve is a major upgrade to a color-grading powerhouse. If you’ve worked with previous versions, or other color correction apps, you’ll find Version 9 contains many useful new features. And if you have it as part of your purchase of the Blackmagic Design Cinema Camera, I'll bet you’ll like the completely redesigned UI that is faster to use and easier to learn. As mentioned earlier, Resolve 9 is also a great platform for dailies delivery. The ability to handle more audio tracks makes it useful here, since you're making all of it available well before the sound department gets hold of it. But the app's sheer speed will probably be more enticing. You can check the web for news reports that have BMD CEO Grant Petty noting that using only one of the two potent Kepler GPUs found in Nvidia’s new K5000 card allows Resolve users to “work in real-time with 4K imagery”. If you are planning to get serious about being a colorist, you may wish to get Blackmagic’s cool control surface for Resolve, or you may opt for a less expensive one manufactured by a third party. Of course, nothing keeps you from simply using a mouse if color grading isn't your full-time occupation. Still not sure? Try Resolve’s free Lite version. It’s no limited-time, brain-dead demo version either. Resolve Lite is a fully functional application; it's just limited to a system with one GPU, so, all things considered, performance won't be as good when grading, let's say, 4K footage. To get the full version, you need only shell out $995. As we’ve mentioned, if you plan on purchasing the new Blackmagic Cinema Camera, you’ll get a full version of Resolve 9 for free. Resolve 9 runs on both Windows and Macintosh platforms. I reviewed it on an HP Z820 Workstation, a great machine to build a Resolve suite around.
Since it can take advantage of multiple Nvidia CUDA-enabled GPUs, consider buying a good card as part of your plans if you want true interactivity. See Blackmagic’s website for more information.

Original article: Blackmagic Does It Right:
DaVinci Resolve 9 Review

©2013 NYC Production & Post News. All Rights Reserved.

A Look at SpeedGrade: Professional Color Grading within Adobe Creative Suite 6


This review is the first of a two-part series on Adobe SpeedGrade. The release of the Production Premium version of Adobe Creative Suite 6 (CS6) last year contained a wealth of important new features in its various apps. I've already written an in-depth review of Premiere Pro, which, in CS6, delivered on its promise of becoming a truly professional NLE. After Effects, a standard part of the toolkit for motion graphic artists and compositors everywhere, also had its fair share of important improvements, including a 3D motion tracker, ray tracing and enhanced caching. However, there is an important and entirely new addition to the Creative Suite that I haven't reviewed: Adobe's pro color-grading solution SpeedGrade. It's something I have been meaning to do for some time now, so let’s dive in.

Filling in the Gaps

The Production Premium version of Adobe's Creative Suite remains unique in the world of motion media app collections. This remarkably full-featured and widely used creative environment for production and postproduction has advanced tools for visual effects, editing, painting, retouching, graphics, pre-production, and audio production. But even with all of its capabilities, previous versions of Creative Suite lacked a truly professional color correction and grading application. That changed in CS6 after Adobe rolled in SpeedGrade, a professional and highly regarded color-grading program originally developed by Munich, Germany-based IRIDAS. Now, Creative Suite has just about everything you would need to put together and finish a complete production, with the possible exception of a high-end 3D animation package, although Maxon's CINEMA 4D works so well with the suite, it is practically a member of the club. But let’s not pretend that color correction wasn’t possible within CS before SpeedGrade. After Effects and Premiere Pro have delivered color correction for years via capable plug-ins such as Red Giant’s Magic Bullet Suite and Synthetic Aperture’s Color Finesse. Indeed, some are satisfied with Premiere's own built-in three-way color corrector. However, none of these are a replacement for the real thing. The addition of SpeedGrade to Creative Suite not only provides a production-proven, industrial-strength color grading environment, but will change the way artists and filmmakers work.

Accessing Media

When you first start up SpeedGrade's slick new interface, you enter the Desktop display. Here’s where you target the media you want to work on. Powered by the 32-bit floating point Lumetri Deep Color Engine (SpeedGrade's underlying GPU accelerated technology), the new app supports a wide range of RAW file formats from cameras such as ARRI and RED, as well as common interchange formats such as QuickTime, DPX and OpenEXR.
The media window displays clips on your drive as thumbnails
On the upper left of the interface, you access the drives and directories on your system, including the clips, EDLs and SpeedGrade projects (.ircp files) you want to work on. By clicking on a folder that contains media, its clips are represented inside the center window as a grid of thumbnails. If the size of the thumbnails is too small, you can scale them up with a slider. Meanwhile, at the top of each thumbnail, a mini timeline allows you to scrub through the clip and tells you which frame you're on as well as the total number of frames in the clip. You can also view the media in list view, which provides a columnar layout of the clips. Clips can be sorted by name, timecode range, resolution, or date. To make searching easier, you can filter the window to display exactly the media you want to view. For example, suppose you navigate to a folder that contains media in many different formats. You can tell SpeedGrade to display only the QuickTime movies, RAW sequences or EDLs. This feature is a timesaver. Once you've located the media you want to work with, you'll need to get it into the timeline in order to start color correcting. Besides using the playback head in the timeline to scrub through the shot, there are standard playback controls located at the bottom of the monitor. You can also use the spacebar to play and pause the footage (or you can use the industry-standard J, K and L keys). Left and right arrow keys allow you to step through the frames.
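Conceptually, the format filter works by matching file extensions against a chosen media type. Here is a minimal sketch of that idea; the extension table and file names are hypothetical, not SpeedGrade's internals.

```python
# Sketch of filtering a mixed media folder by format.
# The extension-to-format mapping here is an invented example.
FORMAT_EXTENSIONS = {
    "quicktime": {".mov"},
    "raw": {".r3d", ".ari", ".dng"},
    "edl": {".edl"},
}

def filter_media(filenames, kind):
    """Return only the files whose extension matches the chosen format."""
    exts = FORMAT_EXTENSIONS[kind]
    return [f for f in filenames
            if any(f.lower().endswith(e) for e in exts)]

folder = ["A001_C002.mov", "B003_C001.R3D", "conform.edl", "notes.txt"]
movs = filter_media(folder, "quicktime")
```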
After importing your clips, switch to monitor view to start the process of color grading.

Coming from Premiere

You have two basic ways to work with your footage. Some might like to grade their clips before they edit. In this case, you can simply import clips and scenes into SpeedGrade, color grade them and then export for use in Premiere or another NLE. Others may prefer to export an EDL (edit decision list), which contains detailed information about the edit, from their NLE's timeline and import it into SpeedGrade for color grading, where it links back to the original media sources. Another, and perhaps the best, approach is to use the Send To Adobe SpeedGrade command inside Premiere's File menu. By invoking this command, Premiere renders the edited frames as a 10-bit uncompressed DPX sequence along with a SpeedGrade .ircp project file, which SpeedGrade CS6 opens automatically. This is best done when the final edit is finished and the picture is locked. While Premiere Pro is widely lauded for allowing native editing of many different file formats including AVCHD and other prosumer camera formats, SpeedGrade doesn't support quite as many just yet. Thus you may have media that lives in your Premiere timeline that SpeedGrade won't link to if you export an EDL. This will likely change in the next release, however. For now, you may wish to stick with the Send To SpeedGrade command. Note that the final purchase of SpeedGrade from IRIDAS came late in the release cycle for CS6. This meant that the IRIDAS developers (who now work for Adobe) didn't have all that much time to fully integrate it with Premiere Pro. You can be pretty sure that the coming version of SpeedGrade will support all the codecs that Premiere does (probably sometime around NAB). In addition, I would expect to see tighter integration with Premiere's workflow, perhaps even a fluid Dynamic Linking system between Premiere and SpeedGrade, similar to what’s now offered between Premiere and After Effects.

Making the Grade

So far we've talked about getting things into and out of SpeedGrade. Now let's get down to what we’re really interested in: color grading. Color adjustments are done in the Look panel on the lower section of the interface (accessible by clicking the Look tab). Here, you'll find color wheels that control offset, gamma and gain. By right-clicking on these wheels, you can enter a virtual trackball mode where the scroll bar adjusts the luminance and the mouse position adjusts chrominance.
SpeedGrade's color wheels and adjustment sliders.
Meanwhile, sliders control saturation, pivot, contrast, temperature (warm to cool), magenta to green balance and final saturation. These controls can be adjusted for allover effect or restricted to shadows, mid-tones or highlights.
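The offset/gamma/gain wheels map naturally onto an ASC CDL-style transfer function. The sketch below shows one common formulation of that math; it is not necessarily the exact transform used by SpeedGrade's Lumetri engine.

```python
# One common offset/gamma/gain formulation (ASC CDL-style), applied to a
# single normalized channel value. This is an illustrative assumption,
# not SpeedGrade's documented internal math.
def grade_channel(x, offset=0.0, gamma=1.0, gain=1.0):
    """Apply gain, then offset, clamp to [0, 1], then a gamma power curve."""
    v = x * gain + offset
    v = min(max(v, 0.0), 1.0)      # clamp before the power curve
    return v ** (1.0 / gamma)

# With this convention, gamma > 1 lifts mid-tones: values below 1.0
# are raised toward white.
mid = grade_channel(0.25, gamma=2.0)
```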

Building Up Layers

I really enjoy SpeedGrade’s ability to build up your grades in non-destructive layers. That means you can stack up primary and secondary corrections, filters, LUTs and effects and, if you like, rearrange them, since different orderings of these elements yield different results. Working with layers is done in the Layer area on the left of the Looks tab.
SpeedGrade's versatile Layers panel
Aside from the ability to stack them up, a great feature of layers is that you can control the opacity level of each one. As in Photoshop, each layer has an opacity slider that can be dialed in and out to increase or decrease its effect. Note that you can quickly toggle a layer on and off by clicking the small eye icon next to it, in an Adobe-like manner. There are different kinds of layers you can make. Let's start with the basics: primary and secondary color corrections. Primary color corrections affect all of the colors in your scene. Secondary color corrections, meanwhile, are applied to specific color ranges, allowing you to selectively accent, modify or tone down individual colors in your image. You can include as many secondary color corrections as you like, allowing you to affect different color ranges separately. During my time with SpeedGrade, I appreciated how I could isolate the specific colors I wanted to affect during secondary color corrections. Through a combination of the eyedropper tool and a series of HSL controls in the Look panel, it was easy to select the exact color range I wanted. You can also gray out all the colors in the image except for the color range you have defined. This is very useful in making sure that you are selecting and affecting the precise colors you desire.
Graying out all the other colors except for the color range you are working with
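Conceptually, a layer's opacity slider is a linear blend between the ungraded and fully graded pixel values, much as in Photoshop. A sketch of that idea follows; the pixel values are invented, and SpeedGrade's internals may differ.

```python
# Sketch of dialing a grade in and out with a layer opacity slider:
# a linear interpolation between the ungraded and graded pixels.
def blend(base, graded, opacity):
    """opacity 0.0 = original image, 1.0 = full strength of the grade."""
    return [b + (g - b) * opacity for b, g in zip(base, graded)]

pixel_in  = [0.2, 0.4, 0.6]   # ungraded RGB, normalized
pixel_out = [0.4, 0.4, 0.3]   # the same pixel after a full-strength grade
half = blend(pixel_in, pixel_out, 0.5)   # grade at 50% opacity
```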
By clicking a button at the bottom of the layer pane, you can choose from a variety of effects such as Gaussian blur, fxBloom, dithering, tinting and many more. Each one of these effects can be a layer. As is the case with any layer, effect layers can be dialed in and out with the opacity slider. Another type of effect layer is the LUT (look-up table). LUTs allow you to emulate different film stocks and stylized looks as well as various camera profiles. Another useful feature in SpeedGrade's timeline is the grading track. It functions in a similar way to adjustment layers in After Effects. Use it by first applying your layers to a grading track, then do things like stretch out the grading track over a range of clips in your timeline. This saves you the hassle of applying the same grade over and over again when you just want to give the same look to all the shots in one scene.
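To illustrate the idea behind look-up tables, here is a sketch of the simpler 1D variety, where an input value indexes into a table of output values, with linear interpolation between entries. The curve below is invented; real film-emulation LUTs are far more elaborate (and usually 3D).

```python
# Sketch of a 1D LUT: map a normalized channel value through a table
# of output values, interpolating between neighboring entries.
def apply_lut_1d(x, table):
    pos = x * (len(table) - 1)          # position within the table
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)    # clamp at the last entry
    frac = pos - lo
    return table[lo] * (1 - frac) + table[hi] * frac

# An invented "crushed shadows, lifted highlights" curve:
curve = [0.0, 0.1, 0.5, 0.9, 1.0]
graded = apply_lut_1d(0.5, curve)
```

A 3D LUT generalizes this by indexing on all three channels at once, which is what lets it remap hue and saturation, not just tone.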

Have a Look

Under the color controls is the Look pane, which allows you to access collections of predefined Looks, which you can apply to your clips. These looks are designed to affect the colors and tones of your footage in different ways, such as that of the latest blockbuster movie, a retro sixties film look or a bleach-bypass style. As with any plug-in, you can also use Looks developed by others.
Some of the predefined Looks that SpeedGrade provides
Looks may give you exactly what you need from the get-go, or they can be customized to create your own personal grades. Of course, you can save your own looks as well, whether you build one from scratch or use a predefined one as a starting point. Look files can always be edited later. You can even export them as LUTs for use with other applications.

Scoping Things Out

As you would expect in a pro color grading app, SpeedGrade contains a waveform display, vectorscope and histogram, standard tools used for precise and informed adjustments when color balancing. For example, with the help of the waveform display you can accurately balance blacks and whites, or fine-tune the values of the separate RGB color channels.
The vectorscope, waveform display and histogram

Masks and Vignettes

SpeedGrade contains masking tools that you can use to limit grades to a certain area or create soft vignettes. Working with masks is handled in the Mask panel. Upon making a mask or a vignette in SpeedGrade, the mask widget appears in the center of the mask. The widget is a useful graphical control that lets you modify the shape of the mask as well as its falloff, scale, rotation and so on.
The mask widget comes in handy when modifying masks
You can add and remove points to a mask and edit them to make virtually any shape you like. SpeedGrade also has the ability to automatically track a mask to moving footage.

Other features

We've touched on many of the important features of SpeedGrade, but there is much more to know about it. For example, using a mouse and keyboard can be slow and tiring, so SpeedGrade supports the Tangent CP200 and Tangent Wave control surfaces, which are in common use by full-time colorists. Aside from providing knobs and sliders that you can manipulate without looking at them, you have to admit that control surfaces just make your setup look cool. SpeedGrade also contains advanced stereoscopic capabilities that, for example, let you adjust the viewing depth for objects within stereo 3D space. The app is fast too, living up to the “Speed” in its name (you might be startled by how quickly the program loads compared to most apps). SpeedGrade owes that speed largely to NVIDIA CUDA GPU acceleration (non-CUDA-capable cards will be disappointing to use!). If you rock an NVIDIA Quadro series 4000, 5000 or 6000 card, expect to see the sort of quick response and real-time performance that will impress a client sitting by your side.

Concluding Thoughts

SpeedGrade is a full-featured and mature color correction system that's a great addition to Adobe's Creative Suite. I think it occupies an important space that needed to be filled in the Production Premium suite, namely pro-level color grading. Adobe made a smart choice in its purchase of IRIDAS, the team that built SpeedGrade, and you can be sure that the app will continue to improve. As mentioned, I expect the next version to include tighter integration with Premiere Pro and support for all the same codecs. In addition, SpeedGrade training resources will proliferate in the future. From where I sit, the future looks bright for SpeedGrade. If you are already a professional colorist working on spots or features, or if you're a video editor or effects artist who is thinking of getting into color grading, SpeedGrade is a package well worth learning. Current users of the Creative Suite will find it a natural choice for color work.


Will MAXON’s Cineware Bring True 3D to Adobe?


Will After Effects finally get real 3D chops? Among the most striking new technologies on view at NAB 2013 was Cineware, which allows for unprecedented integration between the 3D world of CINEMA 4D and the 2D compositing universe of After Effects. CINEMA 4D and After Effects have already played well together thanks to C4D's After Effects plug-in, which is currently available on the MAXON website. With the plug-in, you can do things like generate an After Effects project for compositing from within C4D. You can even export lights and camera data, along with external compositing tags that export an object's position and rotation information. However, when it came to working with the final frames, you still needed to render out your animation into image sequences (or movie files) before you could composite them, add effects or make color adjustments within After Effects. That’s the way people have worked for decades: render out the 3D, then composite it later. The problem with this approach, however, comes when you want to change something in your 3D scene, such as a texture, light, or camera position, after you've done the final render and you’ve already started compositing! This can cause serious delays on your project, since 3D rendering can be very time consuming. Having to re-render just because you made a small change in your 3D scene really sucks (to put it bluntly). Until now, it was just a reality we had to live with.

Look Ma, no rendering!

Now, imagine not having to pre-render your 3D before you bring it into After Effects. What if you could bring your C4D file into After Effects and treat it like any other imported footage layer? This is no pipe dream; rather, it's Cineware: the direct pipeline between CINEMA 4D and After Effects. Here’s how it works. Suppose you decide to change something in the 3D scene while you're compositing it in After Effects. With Cineware, simply open the original file in CINEMA 4D, change what you want, save it, and it’s automatically updated in After Effects. No need to lose sleep waiting for it to render all over again. This is a major development. Cineware radically changes the workflow between these two programs. No longer are the two worlds of 3D and compositing separated. Instead, Cineware allows you to easily move back and forth between the two domains, tweaking things in both programs until you get exactly what you want. When you're satisfied, you render everything together at the same time (2D and 3D). Cineware takes care of the details, calling upon CINEMA 4D's own rendering engine right inside of After Effects.

3D Cameras and more

Besides the ability to render your 3D scene from within After Effects, the Cineware plug-in offers lots of other important functions. You can extract a camera from the C4D scene, for example, and convert it to an After Effects camera. If there is more than one camera in the scene, you can select precisely the one you want. Cineware also lets you use an After Effects camera to navigate through your C4D scene and animate it there as well. Very useful. Besides cameras, you can also use Cineware to extract other 3D data from inside CINEMA 4D into After Effects. This includes lights as well as the solids and nulls created by external compositing tags (which can be replaced with 3D layers and effect points in After Effects). Cineware also lets you send After Effects cameras back to CINEMA 4D. To send other 3D data from After Effects, such as lights and 3D solids, you can export a C4D scene from the File menu, similar to the way you did it with the plug-in. Another really great feature of Cineware is its ability to pull out multi-pass layers from the CINEMA 4D scene. These can be object buffers that you set up with a compositing tag, which can be used as traveling mattes in After Effects, or diffuse, specular, shadow, reflection, ambient occlusion, depth or any other multi-pass layer. Once inside After Effects, each layer is assigned its appropriate transfer mode. Many people prefer to composite with multi-passes since they let you precisely control the look of the final render. For example, you can control how strong or faint your reflections and shadows are in your comp, or dial in the level of AO you want while compositing. CINEMA 4D also lets you render lights in separate passes so you can increase and diminish each individual light in After Effects, or even recolor them. Of course, object buffers are crucial for separating a 3D object from other objects or the background or foreground in your comp. 
Again, remember that all the multi-passes and object buffers are live and are thus rendered on the fly.
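For the curious, here is a sketch of the kind of arithmetic a multi-pass recombine performs, using the common convention of multiplying the diffuse pass by shadow and ambient occlusion and adding specular and reflection on top. The weight parameters stand in for the "dial it in" controls mentioned above; the actual transfer modes Cineware assigns may differ.

```python
# Sketch of recombining render passes for one channel of one pixel,
# per a common additive/multiplicative convention. The formula and the
# weight parameters are illustrative, not Cineware's exact math.
def recombine(diffuse, specular, reflection, shadow, ao,
              spec_amt=1.0, refl_amt=1.0, ao_amt=1.0):
    """ao_amt, spec_amt, refl_amt dial each pass's contribution in or out."""
    # Darken the diffuse pass by the shadow pass and (weighted) AO pass:
    base = diffuse * shadow * (1.0 - ao_amt * (1.0 - ao))
    # Then add the (weighted) specular and reflection contributions:
    return base + spec_amt * specular + refl_amt * reflection

# Full-strength passes for one pixel (normalized values, invented):
full = recombine(0.6, 0.2, 0.1, 0.9, 0.8)
# Dialing reflections down to half strength while compositing:
soft = recombine(0.6, 0.2, 0.1, 0.9, 0.8, refl_amt=0.5)
```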

Let there be Lite

Perhaps just as important as Cineware itself is the exciting news that the next version of Adobe After Effects will bundle in CINEMA 4D Lite. Though the Lite version is missing some important features found in the commercial version, don’t think that C4D Lite is a mere exporter or importer. Far from it. CINEMA 4D Lite is a capable and fully functional 3D program in its own right. There’s a long list of what Maxon's CINEMA 4D Lite will deliver: you can make 3D text; import Illustrator outlines; open complex models you purchase from TurboSquid or Pond 5 and texture, light, and animate them; lathe, extrude, loft and sweep splines to create all kinds of useful models; and subdivide polygonal objects using the Hypernurbs object. C4D Lite also contains a wide range of parametric primitives such as spheres, cubes, toruses, cylinders, pyramids and many more. Lite also contains a wide range of tools to create all kinds of splines, such as Bezier or Cubic splines, as well as many predefined spline shapes. There is also a tasty assortment of deformers, such as twist and bend, that make it easy to deform and animate your objects. The Lite version also contains the Fracture Object, which is activated when you register the program. By grouping objects under a Fracture Object, you can use some MoGraph effectors, such as the plain or random effectors, that are supplied in the Lite version. With all of these abilities, CINEMA 4D Lite is a really useful, full-fledged 3D program, especially for those who are starting out with 3D. Those looking to step up from Lite to the Broadcast or Studio versions of CINEMA 4D will find an easy upgrade path. More details on that, I am told, will surface shortly. Keep in mind that Cineware is not restricted to working only with the Lite version. If you already have a commercial version of CINEMA 4D installed on your system, such as Broadcast or Studio, After Effects will automatically recognize it and use it instead of the Lite version.

Conclusion

CINEMA 4D and After Effects provide an extremely compelling solution for both 3D animation and compositing, providing everything you need for a wide range of projects whether motion graphics, VFX or character animation. The recently announced alliance between MAXON and Adobe, and the close integration of CINEMA 4D and After Effects via Cineware, provides an entire ecosystem that unites the worlds of 3D and compositing in a new way which will fundamentally change your workflow. If you work in compositing and effects, it's hard not to get excited about that. (Full disclosure: I represented MAXON at the unveiling of Cineware at the Adobe booth at NAB.)


Fun, Easy Character Animation with Reallusion’s CrazyTalk 7


Reallusion's CrazyTalk 7 exists in some alternate universe where creating animation isn’t the laborious, time-intensive chore it typically is. Instead, the app allows you to import photographs of friends, celebrities, animals, or even 2D artwork and quickly make them move and speak. No, this isn’t Blue Sky animation we’re creating here, but if you're looking for a quick way to spruce up your website with a memorable talking host à la a Terry Gilliam creation, or even create entire entertaining animations, this easy-to-use program might do the job. Anyone teaching a beginning animation class might also find the program ideal. With a close-up of a face or something similar to work on, you start by clicking to define several key points on your character, such as the outline of the head, eyes, mouth, eyelids, eyebrows and hair. If you are familiar with morphing software, the process is somewhat similar. These points are the key to informing CrazyTalk how to deform the image in order to create phonemes (the component sounds of a word) and facial expressions. You also have the ability to set the axis of the head as well as create the profile of the character. For example, if you intend to animate a dog, you would choose a character profile with a very long snout. You can also add new eyes and teeth (including some creepy vampire teeth, if Goth is your scene). This is all done in a rough-and-ready style that’s a breeze to move through. After the fitting process is finished, you are ready to start animating. This would seem like magic to the animators of yore, but sit back and let CrazyTalk do the animating for you. It’s simple: import an audio file or record your own voice into the program and CrazyTalk does all the work with its “auto-motion technology.” This technology animates your character based on the energy of your voice. 
Reallusion has produced a couple of animations that explain the process of setting up the morphing controls for a character from a photograph: http://www.youtube.com/watch?v=y6NSEoPq_0Q Here comes the vampire part... https://www.youtube.com/watch?v=GZynd3GlmHc Besides CrazyTalk's automatic animation system, you might be intrigued by its puppeteering system. Another simple-to-use part of the app, it lets you quickly figure out how to animate a character in real time with your mouse. A useful touch is that you can apply the motion in layers. That means you don't have to get it perfect in one pass. For more information, check out the Reallusion website.


Making the Grade the Easy Way: SpeedLooks by LookLabs


Recently, I reviewed SpeedLooks for the Editors Guild magazine. This collection of 3D LUTs by Canadian-based LookLabs enables you to take ungraded, log footage and easily deliver sophisticated “looks” similar to what you might see in the latest films and commercials. SpeedLooks can make your images seem as if you had spent hours painstakingly color correcting and grading your footage when all that was needed was a few mouse clicks. You might use one SpeedLooks treatment to give your production a big blockbuster feel with popping colors; others, meanwhile, can deliver a moodier look, including one called NOIR for an effective black-and-white conversion. According to Jeff August, chief colorist at LookLabs, SpeedLooks' designs are a mixture of careful film emulation and profiling combined with tasteful and informed creative color decisions, which he hopes will one day become industry standards. At present, SpeedLooks works in both Adobe SpeedGrade and Blackmagic Design's Resolve. You’re not locked into just one look either: you can apply an effect to your footage as the top layer in SpeedGrade or the final node in Resolve and continue on from there with additional grading, working non-destructively between the SpeedLook and your footage. Another interesting feature of LookLabs' technology is their camera patch technology, which makes the final, graded result more predictable when using cameras from manufacturers such as ARRI, RED, Sony, and Canon. So if you don’t have the luxury of having your own colorist to work with, using SpeedLooks is the next best thing. To get the whole scoop on how SpeedLooks works, read my full review in CineMontage magazine.


Checking Out Autodesk’s Entertainment Creation Suite 2014


At NAB 2013, Autodesk demo’d its latest Entertainment Creation Suite 2014. Now, it’s delivering and ready for action. With it come updated versions of each of the included apps, which are aimed at the animation, visual effects and game industries. The Entertainment Creation Suite (or ECS) comes in several flavors depending on your needs. With the Standard Edition, you get your choice of the main “hub” animation program that will anchor your production: either Autodesk Maya 2014 or Autodesk 3ds Max. You also get MotionBuilder 2014, used to capture, edit and play back character animation; Mudbox 2014, Autodesk’s sculpting and painting tool; and Autodesk SketchBook Designer 2014, a drawing and painting program used for design and conceptualizing ideas. Spend some more money for the Premium version and you’ll add Softimage 2014 to the bundle of your choice, a useful addition for those who really need to extend the whole content creation pipeline with its advanced visual effects and 3D character animation. In fact, some may prefer its animation capabilities to others in the suite. For those who really want it all, the Ultimate version contains all of the products mentioned above. While the fully tricked-out Ultimate edition will set you back $8,395 (list price), that same box of discs is only $250 for students. Seems to me that Autodesk really wants to grow its future market with that low-ball pricing. In this review, I’ll take a look at what’s new in Maya as well as Mudbox and MotionBuilder. While 3ds Max and Softimage have their fans, Maya is perhaps most popular among those who use Autodesk products for high-end film and television production.

Maya's New Features

One useful innovation in Maya is a new system for handling large and complex scenes while avoiding the usual huge memory overhead. Called Scene Assembly, it speeds up the loading of large datasets while increasing viewport interactivity, helpful for large epic battle scenes or complex worlds. Here’s how it works: If you need fast loading and optimal playback while you work, you can use a cache representation of a production asset. If, instead, you need high-resolution geometry for rendering, you can instruct the system to use a scene representation of the object. Thus you can manage the complexity of the scene at the object level by switching between different versions of the objects in your scene, deciding between high performance and detail. When you get used to switching between these approaches, you’ll find that large hierarchical scenes such as complex city layouts and other massive assemblies become easier to navigate and work with. Maya 2014 now contains a Grease Pencil, a useful new tool that allows you to draw with a virtual marker using a pen (or mouse) right inside Maya’s viewport. This allows you to quickly create multiple sketches at different frames (and onion-skin them if you want). Grease Pencil is useful both for animators and directors to make notes and revisions to a scene or quickly block out poses and sketch out lines of action before an animation sequence is even begun. Maya’s joint tool offers new options. Now you can create symmetric joints and joint chains. This is something that should please character riggers, since most rigs have some kind of symmetry to them. Another related new feature is the ability to snap a joint to the center of a volume with the Snap to Projected Center button. This comes in handy when trying to place a joint in tricky areas such as the center of fingers, a challenging thing to do no matter what program you use. 
Maya artists rely on the Node Editor to visually create complex relationships between objects, attributes and functions. In Maya 2014, workflow improvements to the Node Editor let you customize the view of nodes by filtering their attributes and customizing their colors. In addition, creating connections between attributes is simpler and more intuitive, with several hotkeys having been added.

Maya 2014 also includes DirectX 11 integration (in addition to OpenGL mode), an extremely useful feature, especially for game designers. The DirectX 11 UberShader allows such things as tessellated displacement, translucency, blurred reflections and shadows. You can also mix Maya’s native shaders with the new DirectX 11 shaders.

While Maya’s polygonal modeling functionality is capable enough, many have felt it could do better, giving the edge to other programs, including 3ds Max. Maya 2014 addresses this perception by integrating a new Modeling Toolkit (previously Digital Raster’s NEX plug-in). With the toolkit come workflow enhancements such as tools for mesh editing and creation, pre-selection highlighting, vertex locking, slide components and ring/loop selection, all within a single palette for convenience.

Maya’s Paint Effects is a brush-based tool that allows you to create 3D geometry, such as leaves or flowers, just by painting it in. Maya 2014 improves the way geometry created with Paint Effects interacts and collides with surfaces, volumes and other strokes. For example, say you have used Paint Effects to create vines or ivy. With the new Surface Snap and Surface Attract attributes, you can easily make the vines cling to or follow the surface contours of objects in your scene.

Other new features in Maya 2014 that I find useful include a new file path editor, a crease set editor and polygon reduction improvements.

Mudbox and MotionBuilder

Mudbox, Autodesk’s capable sculpting and painting solution, now has retopology tools (or remeshing, if you prefer), perhaps its most important new feature. Remeshing is a hot topic these days, with other tools on the market, including CINEMA 4D and ZBrush, adding retopology functions. Here’s why remeshing matters: while sculpting allows for highly detailed and intricate models, the underlying mesh is often deformed in the process in a way that makes it impractical for animation. In other words, the finished sculpt may look amazing, but the edge flow and underlying topology of the base mesh can be stretched or skewed in a way that makes it inefficient. The new remeshing features let you retopologize the base mesh so that it’s better suited for animation. You can specify a target polygon count and use curves to guide the edge flow during remeshing. Naturally, you can preserve the detail from the original sculpt, which gets applied to the new retopologized mesh.

In addition, Mudbox 2014 now recognizes multitouch gestures supported by vendors such as Wacom. This means that, besides using the stylus to paint or sculpt with, you can use the fingers of your other hand to tumble, zoom or roll the view in a tactile way without relying on the keyboard.

MotionBuilder’s new features include Flexible Mocap, a new optical marker data solver that lets you take data generated by optical markers (used during the motion capture process) and have it directly drive character joints (with squash and stretch capabilities). If you work with motion capture data, this will be a useful feature.
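To give a feel for what mesh reduction does under the hood, here is the simplest possible illustration in Python: snap vertices to a coarse grid and merge the duplicates. Mudbox’s retopology is far more sophisticated — it rebuilds edge flow against a target polygon count and guide curves — so treat this only as a sketch of the basic trade of detail for a lighter mesh; all names here are made up:

```python
# Vertex-clustering decimation, the crudest form of remeshing: quantize
# each vertex to a grid cell of size `cell` and merge vertices that land
# in the same cell. Illustrative only; not Mudbox's algorithm.

def cluster_vertices(verts, cell):
    """Return (merged_verts, remap) where remap[i] is the new index
    of original vertex i."""
    merged, index_of, remap = [], {}, []
    for x, y, z in verts:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in index_of:
            index_of[key] = len(merged)
            merged.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(index_of[key])
    return merged, remap

verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 1.0, 0.0)]
merged, remap = cluster_vertices(verts, 0.5)
# the first two vertices collapse into one; the third survives
```

Real retopology differs from this in exactly the way the review describes: instead of blindly merging, it lays down fresh, animation-friendly edge loops over the sculpt.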

Conclusion

With every release of the Autodesk Entertainment Creation Suite come important new enhancements that increase its usability and functionality. For those who use Autodesk tools, this year’s additions, such as Scene Assembly and the retopology tools, make it worth upgrading. Visit Autodesk’s M&E website here for more information.

Original article: Checking Out Autodesk’s Entertainment Creation Suite 2014

©2013 NYC Production & Post News. All Rights Reserved.

The G Spot of Primary Storage? G-Technology’s G-RAID Drives Reviewed


Visit us on the web at http://nycppnews.com


I recently wrote a review of G-Technology's massive external dual-disk G-RAIDs for CineMontage magazine, the official publication of the Motion Picture Editors Guild (there's a link to the review below). On first sight, it's the G-RAID's sturdy construction and rugged good looks that impress you. But it's also hard not to be struck by the massive capacity in such a compact unit. Editors will appreciate the speedy performance, of course; no surprise, as the drives come with Thunderbolt ports as well as USB 3.0. In my review, I compared the performance of the G-RAIDs to a range of other drives, including SSDs, SSD RAIDs and external USB 3.0 disks, and presented the results in a chart. I also took a close look at the features of these heavy-duty storage solutions from G-Technology, a company with a long history in our industry. If you are involved in the creation of media, from motion picture editing and audio recording to visual effects, you'll want to see what I found out. Click here to read the full review on the CineMontage website.

Original article: The G Spot of Primary Storage? G-Technology’s G-RAID Drives Reviewed

©2014 NYC Production & Post News. All Rights Reserved.

The post The G Spot of Primary Storage? G-Technology’s G-RAID Drives Reviewed appeared first on NYC Production & Post News.


Silhouette V5: Much More than Just Rotoscoping





In the world of visual effects, rotoscoping is a critical component of almost any project. Whether you need to isolate elements from their backgrounds, integrate CGI with live action footage or key out actors from a green screen, VFX artists know that rotoscoping and matte generation are a big part of the job. However, "roto" work can be tedious if you’re not using the right tools. There are a number of software options already on the market, but Silhouette, from SilhouetteFX, is a standout; it doesn't hurt that company partner Perry Kivolowitz has taken home a technical Academy Award and some Emmys for his work developing visual effects tools. Silhouette, now in Version 5, was designed to make the process of rotoscoping and matte generation easier, and those are surely two of its strongest features. However, there’s a whole lot more to Silhouette, including industry-leading morphing and warping, planar tracking, nondestructive paint and impressive stereoscopic tools. Recently I checked out Silhouette V5. No surprise, but I found an even more sophisticated tool set that can play an important role in any VFX pipeline. Others must think so too: it has turned up at hot companies such as Framestore, which used it on the production of Gravity, while Weta Digital employed it on The Hobbit: The Desolation of Smaug.

Serious Rotoscoping Chops

Silhouette’s rotoscoping tools let you create multiple shapes for matting out elements in your images. You can use B-Splines, X-Splines and Bezier curves, and you can also easily create primitives such as circles and squares. Shapes can conveniently be combined with or excluded from each other. Meanwhile, you can place multiple spline masks under a layer group to transform, rotate or scale them with one widget, similar to parenting elements under a null object.

I opened a shot of a New York street in Silhouette to roto a car (above). Right away I could sense that a lot of thought had gone into the whole roto process, as everything worked in a fluid, smooth way. I really liked being able to use B-Splines and X-Splines, either of which I find easier to manipulate than Beziers for this kind of work. Easily accessible controls let me blur the mask as well as shrink it, grow it and change its stroke width. Motion blur can also be enabled per spline for enhanced realism, with controls to adjust the blur's shutter angle, phase and number of samples. In addition, you can feather the spline at any place along its edge with the intuitive and handy feather tool.

New in Silhouette V5 is the ability to create IK (inverse kinematics) chains for your masks. 3D character animators know IK well, since it’s a key technique when creating skeletal rigs: it makes natural-looking limb movement much easier to produce while cutting down on the number of keyframes needed. In Silhouette, you create IK hierarchies by parenting masks to each other. After you set the positions of the joints, Silhouette automatically generates bones. Using multiple masks in conjunction with inverse kinematics for rotoscoping is an innovative approach that I haven’t seen before.
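For the curious, the math an IK chain hides from the artist looks roughly like this: a two-bone analytic solve using the law of cosines. This is a generic illustration in Python, not Silhouette’s implementation:

```python
import math

# Two-bone analytic IK: given two bone lengths and a 2D target, return
# the shoulder and elbow angles that place the chain's tip on the target.
# Generic textbook math, shown only to illustrate what an IK solver does.

def two_bone_ik(l1, l2, tx, ty):
    d = math.hypot(tx, ty)
    # Clamp the target into the reachable annulus of the chain.
    d = max(min(d, l1 + l2 - 1e-9), abs(l1 - l2) + 1e-9)
    # Elbow bend from the law of cosines.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder = angle to target minus the inner triangle angle.
    cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

Because the solver computes both joint angles from a single target position, the animator only keyframes the tip of the limb, which is exactly why IK cuts keyframe counts.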

Warping and Morphing

Now we come to what I consider one of Silhouette V5's unique strengths: its morphing abilities. When I was getting started in visual effects in the mid-1990s, morphing was a wildly popular effect appearing in all kinds of productions. The craze was undoubtedly spurred on by Michael Jackson's music video for 'Black or White', which featured a sequence of people morphing into each other. That effect caused quite a stir in VFX circles (things like that made a big splash back then). Morphing soon became so popular I would often hear the phrase “Can’t we just morph it?” in the studio I worked in, a cousin of the old production line "Can't we just fix it in post?" Another popular use of morphing could be found in Star Trek: Deep Space Nine. The Paramount TV series featured a character named Odo, who could change or morph into someone else. Odo’s morphs were done using Elastic Reality, which at that time was a very popular morphing app. I used it, as well as a program called Gryphon Morph. Unfortunately, both programs fell by the wayside: Gryphon Morph simply disappeared, while Elastic Reality was purchased by Avid, which discontinued it in 1999. Had morphing fallen out of fashion? Even if it's no longer wildly popular, morphing remains an impressive and compelling effect when used selectively. I recently needed to create a morph, but at first could not find any good software to handle it. Frustrated, I wondered why something I considered a staple of visual effects suddenly had no capable software available. That is, until I checked out the morphing abilities in Silhouette. Not only was it capable, its morphing kind of blew me away. That makes sense, since Silhouette's morphing was created by Perry Kivolowitz.
With a long history creating crucial software for our industry, Kivolowitz received a 1996 Technical Achievement Award from the Academy of Motion Picture Arts and Sciences for the co-invention of shape-based warping and morphing in Elastic Reality. No wonder it works.

While warping and morphing are related, there are differences. Warping is the transformation of the pixels within a single image; with it you can create effects such as making facial features bulge, distorting images or making shoulders shrug. Morphing, on the other hand, is the transformation and blending of the pixels of one image into another, which can give the effect of a person or a creature transforming into something else.

In Silhouette, warping and morphing are done with source and target splines. The way pixels morph can be precisely controlled with the all-important correspondence tool; without it, pixels can move in unexpected and unnatural ways. The warping of images can also be limited by barrier splines, which protect areas you do not want to distort. In addition, you can vary the rate at which different source and target splines morph on a shape-by-shape basis, rather than having the entire morph happen concurrently, which allows for a more natural-looking result.

Silhouette has an extremely elegant morphing solution. It not only offers a high level of control but has very snappy performance. In short, I am really glad to find a morphing tool that works so well. I've sorely missed morphing, and it's great to have it back. Long live morphing!
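Stripped to its essentials, a morph is two operations working together: interpolate corresponding control points so the shapes warp toward each other, and cross-dissolve the (already warped) pixels. A toy Python sketch of those two ingredients, with made-up data:

```python
# The two ingredients of a morph, in miniature: warp corresponding
# spline points from source toward target, and cross-dissolve pixel
# intensities. A real morph (Silhouette's included) warps whole images
# through splines with correspondence control; this just shows the idea.

def lerp(a, b, t):
    return a + (b - a) * t

def morph_points(src_pts, dst_pts, t):
    """Interpolate corresponding control points at time t in [0, 1]."""
    return [(lerp(sx, dx, t), lerp(sy, dy, t))
            for (sx, sy), (dx, dy) in zip(src_pts, dst_pts)]

def cross_dissolve(src_px, dst_px, t):
    """Blend pixel intensities of the two warped images."""
    return [lerp(s, d, t) for s, d in zip(src_px, dst_px)]

mid_shape = morph_points([(0, 0), (10, 0)], [(2, 4), (12, 6)], 0.5)
# halfway through: [(1.0, 2.0), (11.0, 3.0)]
```

The correspondence tool the review describes is what decides which source point pairs with which target point in `zip` above; get that pairing wrong and pixels slide in those “unexpected and unnatural ways.”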

Tracking with Mocha Pro

Tracking is key in visual effects work, and Silhouette covers it with both point and planar tracking. Planar tracking is important because it allows you to track an entire plane of shapes at once, which can seriously reduce the time spent cutting individual mattes. All you need to do is track a plane, and all the mattes that lie on it follow along over the course of the scene with very few keyframes needed. While previous versions of Silhouette already had a planar tracker, Version 5 builds in Imagineer Systems’ award-winning mocha Pro planar tracking technology, a great addition to the app.
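The reason one tracked plane can carry many mattes is that a plane’s frame-to-frame motion can be described by a single 3x3 homography; every matte point on that plane moves by the same matrix. A hand-rolled Python illustration — the matrix here is made up for demonstration, not tracker output:

```python
# Apply a 3x3 homography to 2D points: multiply in homogeneous
# coordinates, then divide by w. Once a planar tracker solves H per
# frame, every matte point on the plane rides along for free.

def apply_homography(H, pts):
    out = []
    for x, y in pts:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))
    return out

H = [[1.1, 0.0, 5.0],   # a made-up slight zoom plus drift
     [0.0, 1.1, 2.0],
     [0.0, 0.0, 1.0]]

matte = [(100.0, 100.0), (200.0, 100.0), (200.0, 180.0), (100.0, 180.0)]
moved = apply_homography(H, matte)   # the matte follows the tracked plane
```

With affine-only trackers the bottom row would stay `[0, 0, 1]`; a full homography’s nonzero bottom row is what lets planar trackers handle perspective changes.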

Painting Innovations

The ability to paint nondestructively within Silhouette lets you correct problem areas, perform rig removal and fix facial blemishes. The painting system in Version 5 now includes Auto Paint, a breakthrough that combines the speed and detail of raster-based paint systems with the repeatability and animation capability of vector-based ones. This marriage of raster and vector techniques captures each stroke you make into a database; you can then instantly play the strokes back to paint over all the frames in a sequence. What I like about this approach is that you can use keyframes and trackers to control where the repainting of the strokes happens. It’s an important, timesaving innovation that will shave hours off otherwise tedious painting challenges.
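A toy model of that record-and-replay idea: strokes go into a database as vector data, then get repainted on later frames offset by tracker data. This is purely illustrative Python with hypothetical names, not Silhouette’s API:

```python
# Sketch of a vector-recorded, raster-replayed paint system: every
# stroke is stored as points, then replayed on other frames shifted by
# a tracker's offset. Hypothetical names; not Silhouette's API.

class StrokeDB:
    def __init__(self):
        self.strokes = []              # each stroke: list of (x, y) points

    def record(self, points):
        """Capture a stroke the artist painted on the reference frame."""
        self.strokes.append(list(points))

    def replay(self, track_offset):
        """Repaint all strokes shifted by the tracker's (dx, dy)."""
        dx, dy = track_offset
        return [[(x + dx, y + dy) for x, y in s] for s in self.strokes]

db = StrokeDB()
db.record([(10, 10), (12, 11), (14, 13)])   # artist paints on frame 1
frame2 = db.replay((3, -1))                 # tracker: plate moved (3, -1)
```

Because the strokes stay in the database as vectors, editing one stroke or swapping the tracker re-renders every frame, which is the repeatability raster-only paint can’t offer.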

Moving in Stereo

Silhouette's stereoscopic tool set is deep. It was the first roto and paint system to offer a full stereo workflow, which has made Silhouette a top tool in the process of converting 2D movies to 3D. Version 5 now offers an optional S3D node for those working on 3D films, incorporating RealityTools technology from 3D Impact Media, a new strategic partner of SilhouetteFX. Available at additional cost, the S3D node facilitates 2D-to-3D conversion and stereoscopic processing in several ways. For example, Silhouette’s roto tools have become depth-enabled, and new roto tools such as ramps, hallways and tunnels have been added. In addition, depth maps can be built in Silhouette from both mono (non-stereo) and stereoscopic images. Other important upgrades to the stereo features include the ability to finely adjust parallax and convergence during post production as well as the ability to generate virtual camera views. All these innovations, along with the integration of RealityTools, make Silhouette a robust solution for working in stereoscopic 3D.
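At its core, generating a second eye view from a depth map comes down to shifting pixels horizontally by a parallax amount derived from their depth. Here is a deliberately naive one-scanline Python sketch; real conversion tools also filter, fill occlusions and handle sub-pixel shifts:

```python
# Naive depth-to-parallax view synthesis on a single scanline: each
# pixel is scattered sideways by an amount proportional to its depth
# value. Later writes win, so occlusions are handled crudely at best.
# Illustration only; not how any shipping converter works internally.

def make_eye_view(scanline, depths, strength):
    """Build one eye's view; nearer (higher-depth-value) pixels shift more."""
    out = list(scanline)                      # start from the source line
    for x, (px, d) in enumerate(zip(scanline, depths)):
        shift = int(round(strength * d))      # parallax from depth
        nx = x + shift
        if 0 <= nx < len(out):
            out[nx] = px                      # scatter the pixel sideways
    return out

left  = [10, 20, 30, 40]          # source scanline (pixel intensities)
depth = [0, 0, 2, 2]              # made-up depth values per pixel
right = make_eye_view(left, depth, 0.5)
```

Adjusting the `strength` factor is, in spirit, what adjusting parallax in post does: the same depth map can be re-rendered with more or less eye separation without re-rotoscoping anything.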

Conclusion

Silhouette is a well-designed program with an intuitive user interface. Its sophisticated rotoscoping abilities could save you days of work, and the new mocha-based planar tracking and hybrid paint system make the whole app even more potent. For those working in stereoscopic 3D, Silhouette’s toolset is remarkably capable and has become a comprehensive application for 2D-to-3D conversion; V5’s integration of 3D Impact Media’s RealityTools makes it even better. The morphing and warping system built into Silhouette is simply top notch. I am pleased that I can turn to a tool that does this so well, thanks in a big way to Perry Kivolowitz, an innovator who helped create image morphing in the first place. The standard version is $1,495. If you're doing 2D-to-3D movie conversion, you'll need the version with the S3D option at $3,995. For a full list of the many enhancements in V5, visit the SilhouetteFX website at www.silhouettefx.com.

Original article: Silhouette V5: Much More than Just Rotoscoping

©2014 NYC Production & Post News. All Rights Reserved.


The Foundry’s NUKE 8 Reviewed – And Thoughts About NUKE STUDIO





I recently wrote a review of the latest version of NUKE, The Foundry’s popular node-based compositing and visual effects app. In the review I go over the latest advancements to the program, such as the new Dope Sheet, which presents artists with a timeline- and keyframe-based approach. Here is a link to my review of NUKE 8, which appeared in CineMontage magazine.

The major studios already rely on NUKE to create effects for many of Hollywood’s most visually stunning, effects-laden movies, including Iron Man 3, Lincoln, and The Hobbit.

The Foundry, however, doesn’t seem satisfied with that, and is taking aim at bigger markets. At NAB, the London-based company announced something really new: NUKE STUDIO, due out later this year.

If you haven’t heard about it already, here’s what the fuss is all about: NUKE STUDIO is a complete end-to-end node-based VFX, editorial and finishing solution that can be used by individual artists working independently in their own creative studios as well as by large teams working in collaborative environments. If you’re thinking that sounds a lot like Autodesk’s Flame, a long-time fixture in high-end effects, you might be right.

As a highly integrated application, NUKE STUDIO aims to allow you to take a project from start all the way through to finish. As part of that control, the app will integrate HIERO, The Foundry’s media management and versioning tool that allows artists to share scripts and work collaboratively with annotations.

In addition, NUKE STUDIO will offer real-time GPU accelerated effects that can be added directly in the editing timeline. Of course, if more complex effects work is needed, you can easily switch over to NUKEX, The Foundry’s full-featured node-based compositing and effects environment.

NUKE STUDIO will also offer real-time 4K playback within the application itself or through SDI-Out hardware. Real-time 4K playback makes NUKE STUDIO a serious contender for client attended sessions where smooth playback of material at the highest resolution is essential to keep things moving along.

The new suite also contains intelligent built-in rendering features which will automatically make use of render farms and other available resources on a network.

I think this is exciting news. From what I saw of the demo at NAB, NUKE STUDIO promises a very capable and fully featured tool set. As mentioned, NUKE already enjoys a high level of buy-in among those who work on effects-heavy feature films. That acceptance could help NUKE STUDIO compete with other integrated, collaborative end-to-end effects, editing and finishing solutions.

But we’ll just have to wait to learn whether that’s enough to make folks move from already established products like Flame or Assimilate’s Scratch. It’s a small market at the high end. Meanwhile, companies can’t charge as much as they once did for such potent software.

To watch a video of the complete unveiling event of NUKE STUDIO at NAB, click here.

 

 

Original article: The Foundry’s NUKE 8 Reviewed – And Thoughts About NUKE STUDIO

©2014 NYC Production & Post News. All Rights Reserved.


Free as Ever, Lightworks Finally Does Mac





Since the advent of non-linear editing systems (NLEs), you’ve had a lot to choose from. Whether you prefer this or that approach to layout, or one style of keyboard shortcuts over another, you’ve been able to find something you like since these apps became available in the 1990s.

Today, after years of apps competing on this or that approach, it’s easy to find an NLE that does everything you want – though they still can’t do the edits automatically. Not yet anyway.

Along with editing programs becoming more capable and more powerful, the cost of getting one on your desk has plunged. Prices have gone down while features have gone the opposite direction. That’s a pretty nice combination.

But even if you’re a working editor, you might not have heard much about Lightworks, a powerful, professional NLE that has been running on Windows and Linux setups for years and is free in its basic version. (The supported version goes for $60 a year, still pretty cheap!)

Finally, a Mac version is out and running.

Considering how under the radar it’s flown over the years, Lightworks has been used to cut a surprisingly significant list of feature films and Hollywood blockbusters. The app has been used alone or in part to help top editors on productions that include The King’s Speech, Hugo, L.A. Confidential, Pulp Fiction and The Wolf of Wall Street.

Recently, I wrote a review of Lightworks for CineMontage, the magazine of the Editors Guild. To read my review and learn more about this surprisingly versatile NLE, follow the link below to a PDF of the latest issue of CineMontage; my review of Lightworks is on page 43.

CLICK HERE FOR THE REVIEW ON PAGE 43

If you are a Lightworks user, add your thoughts about the program in the comments below. I’d love to hear what you think.

Original article: Free as Ever, Lightworks Finally Does Mac

©2015 NYC Production & Post News. All Rights Reserved.


How Does Lenovo’s ThinkPad W540 Workstation Stack Up?





In corporate America, millions rely upon their portable computers every day to do their work. But what if the usual run-of-the-mill machine isn’t enough? Video editors, 3D animators, VFX compositors, colorists and graphic designers are the first to require horsepower above and beyond what the typical laptop delivers.

Extra horsepower means, of course, more powerful processors, better displays, sturdier builds, more RAM, the latest I/O ports and so on. Industrial-strength portable machines like these form a class of computer all their own, known as “mobile workstations,” offering elements of both worlds, desktop and laptop. By now you know the leading manufacturers who can deliver this: HP, Dell, Apple and Lenovo.

Even with the extra power that mobile workstations offer, in the past, post production professionals still needed to turn to their desktop workstation towers for demanding jobs such as feature editing projects, high end visual effects work and so on.

And that’s pretty true for the most part. Workstation towers still outperform mobile units for demanding work with their dual processor Xeon chips and 12 GB graphics cards. Plus they’re plugged into the wall, so power isn’t a problem.

However, I have noticed that the gap between the two is shrinking. Thanks to better CPUs and GPUs that have designs that sip power, mobile workstations are now able to take on many of the tasks that once were only relegated to the big iron.

Lenovo’s Push into Workstations

Lenovo is well known in the business world for its highly regarded and best-selling ThinkPad line of notebook computers (a business acquired from IBM, which sold its PC division to Lenovo in 2005). With the release of the W540, the line has also become a compelling choice for production and post professionals seeking a mobile workstation.

The Lenovo W540 Mobile Workstation

Things are heating up in the more powerful desktop workstation tower market as well, because Lenovo has unveiled plans for its P-series ThinkStations, heavy-duty workstations built to handle the most serious production challenges. See our recent story about these impressive machines here. The new ThinkStations look very compelling, and we’re interested in seeing how they fly, so stay tuned. In the meantime, let’s talk about the ThinkPad W540 Mobile Workstation.

First Impressions

The W540 is quite compact yet solid, a sturdy-feeling machine measuring 14.8″ x 9.8″ x 1.1″ and weighing 5.57 lbs. While there is still a limit to how compact a seriously powerful machine can be, the W540 is smaller and lighter than previous generations of mobile workstations I’ve encountered, making it easier to carry around on long hauls. An ultrathin it’s not, but for a machine of its class, it’s quite compact.

The top of the W540.

In the looks department, it’s rather attractive, sporting Thinkpad’s characteristic, matte black finish that doesn’t show fingerprints. The ThinkPad logo on the cover sports a little LED power indicator light where the dot of the “i” should be. A nice touch.

One of the first things I noticed when I turned on the W540 is its screen: an impressive 15.6″ IPS display offering 3K resolution, or 2880 x 1620 pixels. That’s a massive number of pixels, useful when working on 4K VFX jobs, editing projects or complex GUIs, and on par with the Retina displays available on the MacBook Pros. For those who don’t want such high resolution, there is an option for a 1920 x 1080 HD display. Personally, I like super high resolutions; in my book, the higher the better.

The W540 calls upon some familiar names to keep the color true. You can add an optional built-in X-Rite color calibrator, which uses a sensor on the chassis of the machine along with software by Pantone to calibrate your display. This is important if you are doing color grading or any other type of work that demands accurate color. You can run the calibrator at any time; it takes just a few minutes to do its job. I find it a very handy feature to have integrated right on your computer.

The built in color sensor which is used to color calibrate the W540’s display. To the right is the fingerprint reader.

Pantone color software works with the sensor to color calibrate your monitor.

The machine’s processor is the Intel Core i7-4800MQ, a 22nm Haswell part with a base clock of 2.7 GHz. With Hyper-Threading turned on, its four cores process up to eight threads in parallel, running up to 3.7 GHz with Turbo Boost. With an 8MB L3 cache and support for 1600MHz memory, this chip smokes. Depending on what you’re doing, it might be all you need.

Ports and I/O

Something you’re sure to like on the W540 is the integrated Thunderbolt port, a speedy connection that has been slow to come to the PC side. Boasting twice the speed of USB 3.0 at 10 Gbps, Thunderbolt is great for video editors connecting massive hard drive arrays, and it’s also ideal for products such as the UltraStudio Thunderbolt. Note that the Thunderbolt port also doubles as a Mini DisplayPort for connecting external high-resolution monitors.
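Those line rates translate into real waiting time. A back-of-the-envelope Python comparison, ignoring protocol overhead and the drives’ own speed limits:

```python
# Rough best-case transfer times at the quoted line rates: Thunderbolt
# at 10 Gbps vs USB 3.0 at 5 Gbps, moving a 100 GB batch of footage.
# Ignores protocol overhead and the attached drive's actual throughput.

def seconds_to_move(gigabytes, gbps):
    bits = gigabytes * 8            # GB -> gigabits
    return bits / gbps

usb3  = seconds_to_move(100, 5)     # 160 s
tbolt = seconds_to_move(100, 10)    # 80 s
print(f"USB 3.0: {usb3:.0f} s, Thunderbolt: {tbolt:.0f} s")
```

In practice a spinning-disk array won’t saturate either bus, but the headroom is what lets Thunderbolt feed multi-drive RAIDs and capture hardware at full tilt.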

In addition to air vents, ports on the left side of the W540 include (from left to right) Thunderbolt, VGA, USB 3.0, USB 2.0, an Express Card port, 4-in-1 card reader and a mic/headphone jack.

Aside from Thunderbolt, the W540 comes with two USB 3.0 ports (also quite snappy) as well as two USB 2.0 ports, useful for legacy devices, mice, drawing tablets and so on.

The right side of the machine includes (from left to right) a dual-layer DVD burner, USB 3.0 port, USB 2.0 port and Kensington lock.

You’ll also find an ExpressCard slot for expanding your system’s capabilities with add-on cards such as Smart Card readers. In addition, there’s a 4-in-1 memory card slot, handy for ingesting media from cameras or sound recorders. Other ports include a standard RJ45 Ethernet port, a microphone/headphone combo jack and a VGA port. The W540 also has a fingerprint reader for enhanced security.

The back of the W540 with the larger 9 cell battery power connection and air vents.

Graphics

An Intel HD Graphics 4600 (GT2) processor comes integrated with the Haswell CPU. While the Intel 4600 delivers fair performance, the W540 also came with a much more powerful graphics chip: an NVIDIA Quadro K2100M, a Kepler-based GPU compatible with DirectX 11 and OpenGL 4.3 and designed specifically for mobile workstations. Quadro, for those familiar with NVIDIA’s products, is the company’s professional line of GPUs.

Bottom view of the Lenovo W540

The K2100M has 576 parallel CUDA cores and 2GB of GDDR5 memory with a memory bandwidth of 48GB per second, and it supports connected high-resolution monitors up to 3840 x 2160. This GPU significantly outperforms its predecessor, the K2000M, and is capable of handling high-polygon-count 3D modeling, high-resolution texture maps and GPU-accelerated editing. If you’re a gamer, it will perform nicely at medium to high settings there too.

Other features

The W540 came with Windows 7 Professional. You can opt to get Windows 8.1 Professional if you prefer though many users choose Windows 7 (at least until Windows 10 comes out).

Regarding memory, the W540 comes standard with 8GB of DDR3 1600MHz RAM. The computer supports up to 16GB. There is also an integrated HD webcam.

The W540 also came with a 256GB SSD as well as an 8x dual-layer DVD recorder. However, there is an option to get a second 500GB or 1TB hard drive instead of the optical drive. Though you can connect fast external drives (or RAIDs) to the Thunderbolt or USB 3.0 ports, it’s nice to know you can install another drive in place of the optical drive if you want, though you’ll need a special bay adapter first.

As far as battery life is concerned, the W540 offers a 9-cell battery, but if weight is a consideration, a smaller 6-cell one is also available. According to Lenovo’s specs, the battery will last upwards of six hours, which is pretty much in line with my experience. Of course, it all depends on what you’re doing; heavy-duty work like 3D rendering on all cores will drain the battery faster than writing emails. All in all, respectable battery life.

Dolby Home Theater 4 controls the audio on the W540 and includes a ten-band graphic equalizer.

For audio, the W540 has Dolby Home Theater 4, which offers increased audio clarity and distortion-free output at high levels through two built-in stereo speakers. Dolby Home Theater 4 has a nice control interface with a ten-band graphic equalizer; you can call up built-in profiles for movies, music and games, or create your own sonic profile.

The keyboard of the W540 is comfortable to use and squeezes in a numeric keypad, a feature I like. The only thing I found a little odd was the Function key at the lower left, where I’m used to having the Ctrl key. You get used to it after a while, though.

The Keyboard on the Lenovo W540 includes a numeric keypad.

The W540 includes a nice sized touchpad.

The touchpad is large and worked well in my tests. That said, I almost never use touchpads; like many artists and animators, I prefer to attach a drawing tablet or a mouse. If you are a fan of touchpads, though, I think you’ll like it.

Working with the W540

Whenever I test a new machine for production work, one of the first things I like to do is use it for video editing. For me, a complex edit is a good indication of performance.

As soon as I started up the W540, I installed Adobe Creative Cloud, connected a 2TB G-RAID, imported a batch of footage I shot in London into Premiere Pro and set about editing together a sequence of clips.

Editing video in Premiere on the Lenovo W540

I then started setting in and out points on the footage, dragging clips into the timeline and doing cuts, trims, slides and other edits.

The W540 kept up quite well. Scrubbing the timeline was reasonably smooth, and the interface kept up without much lag at all. Frame rates were good too. Of course, you can’t expect a mobile device to perform as snappily as a liquid-cooled, dual-CPU workstation tower with an expensive GPU, but the W540 performed quite respectably and I was able to get the job done efficiently. Just a couple of years ago, this was the kind of project that could bring a mobile computer to its knees.

Next, I opened a 3D scene in MAXON’s CINEMA 4D and set about doing some modeling and rendering. The NVIDIA Quadro K2100M handled the interaction well, displaying texture maps and lighting in the viewport at a very respectable frame rate. Final render times fared well too, and some compositing I did in After Effects went smoothly as well. The verdict: the W540 provides ample horsepower for a wide range of professional production challenges.

Specs and Benchmarks

My benchmark testing for the W540 consisted of MAXON’s Cinebench, which is very good at gauging how a machine handles 3D CPU and OpenGL rendering. I also ran PCMark 8’s Creative test (OpenCL accelerated), which performs an array of video editing, encoding, and photo-retouching tests (as well as others such as video conferencing and internet surfing). Below are the results of the tests, starting with Cinebench.


The results of the CineBench test for the W540 with a score of 71.65 fps for OpenGL and 634 for the CPU.

The machine received a score of 71.65 fps in the Cinebench OpenGL (GPU) test. As you can see in the image above, the W540 performed exceedingly well compared to other machines in its class, sitting at the top of the list and beating out a four-core Intel Core i7-3840QM paired with a Quadro K4000M GPU. That’s rather impressive.

For the CPU test, the comparison list includes a variety of CPUs, such as a 12-core Intel Xeon as well as six- and four-core desktop i7 processors. Of course you wouldn’t expect the 2.7 GHz Intel Core i7-4800MQ in the W540 to beat those, but it does sit comfortably atop the other four-core mobile i7s with a very speedy result of 634. Both of the W540’s Cinebench scores, OpenGL and CPU, were excellent.

The results of the PCMark 8 Creative Test (with the OpenCL acceleration option).

For the PCMark 8 test, the overall score was 3021, another high mark. See the image for individual scores in 4K video editing, photo editing, and the other benchmarks. To see how the W540’s PCMark results compare with your computer, download PCMark 8 and run it on your machine, or compare scores on Futuremark’s website.

Conclusion

The Lenovo W540 is a very capable machine, with reasonable entry-level pricing of around $1,330 during a limited-time sale on Lenovo’s own website. This is a machine that can handle many tough production challenges, including video, audio, 3D animation, and visual effects work. If you work in production or post-production and are considering a mobile workstation, I recommend you check out the Lenovo ThinkPad W540, a machine at the top of its class.

Original article: How Does Lenovo’s ThinkPad W540 Workstation Stack Up?

©2015 NYC Production & Post News. All Rights Reserved.

The post How Does Lenovo’s ThinkPad W540 Workstation Stack Up? appeared first on NYC Production & Post News.

Blackmagic Design’s DaVinci Resolve 11 Adds NLE, Competes With Other App Suites


Blackmagic Design’s DaVinci Resolve 11 has been out for some time. So after getting to know it for a while, here are my thoughts on the latest version of one of the top color grading applications around.

DaVinci is a name that garners respect, gained from years of heavy use by professional colorists and independent filmmakers. You’ll find it on set as well, helping DPs add looks to raw footage or making a DIT’s media management easier. I’ll be releasing a more in-depth review of Resolve 11 in the near future that examines its new features more closely. For now, I’ll focus on some of the big issues that make V11 such a compelling release (and stir up a bit of controversy, too).

The Big News: Editing

While I’ll go into detail about the enhancements to its color correction tools, let’s first address the one big thing that takes Resolve into a different arena entirely: by adding professional editing capabilities, including audio, right into the same application you’d use for color correcting, Blackmagic is going after a much larger market, one that Adobe and Avid have been staking out for some time.

The color grading chops of DaVinci are second to none. Now, besides top tools for color grading, you can edit, add transitions, do conforms, and finish all in one application. No more shuttling between the edit suite and the color suite. This is major. (For a list of all that DaVinci Resolve 11 now does in its various versions, visit this page on the Blackmagic Design site.)

DaVinci Resolve’s new editing environment contains features found in pro NLEs.

Of course, Resolve 11 also has its share of new features in its color correction toolset. Working in depth with RAW footage gets its due with the new camera RAW palette, which features controls for shadow, midtone, and highlight recovery as well as color boost, saturation, lift, gain, and contrast adjustments. All of these capabilities should be very familiar to photographers moving into cinematography, only this time they work on moving images.

The new color match feature can automatically create a base color grade for shots that include a standard color chip chart at the head or tail. This is a smart way to bring multiple shots to a common starting point before diving into the grade. And, a bit like having your own DIT, Resolve 11’s new clone tool copies media drives, memory cards, and camera packs to multiple destinations at the same time.

Regarding the NLE capabilities in Resolve 11, let’s just say that I am very impressed. Blackmagic has smartly gone after an all-in-one approach that other product suites (did I mention Adobe yet?) are only headed toward implementing.

Resolve’s editing toolset is comprehensive, offering pretty much everything you would need for high-end professional work: multiple tracks of video and audio, transitions, titling tools, an audio mixer, and more. To grade a clip in the timeline using Resolve’s top-notch color correction tools, simply click on the clip and switch from the Edit page to the Color page. When you’re done, switch back to the Edit page and off you go.

Resolve 11 doesn’t stop there; after all, who but a dedicated indie works all alone these days? Resolve introduces brand-new collaboration tools that let multiple colorists work on the same project while seeing the timeline updated immediately by the editor. If you’re working on a large motion picture or TV production, this is an incredibly useful feature.

Conclusion

As I mentioned, look for my more detailed review of DaVinci Resolve 11 coming soon. There I’ll dive into the editing, trimming, mixing, and transition functions, and give my thoughts on whether Resolve 11 has what it takes to be a truly professional NLE.

As usual with Blackmagic Design, there’s more happening at once: the well-respected compositing app Eyeon Fusion, which Blackmagic recently purchased, is already available for download on the Blackmagic website, for free. Fusion is a remarkably powerful node-based compositing and visual effects platform and an attractive alternative for users of other compositing and VFX apps.

Blackmagic Design has always been aggressive when it sees openings or lapses in other companies’ products, or the chance to jump ahead by acquiring a complementary product and enhancing it with characteristic flair. The result is a compelling family of products, from the camera through all stages of post.

Things are indeed getting very interesting.

