Setting the scene - Feb 1999
DPReview.com's test scene is probably the most recognized feature of our reviews. Other tests have come and gone as the need for them has ebbed and flowed, but almost every one of the site's reviews has included some sort of test scene.
And it's been at the heart of one of the greatest criticisms of the site: 'Oh, they only shoot test charts.' (This despite the real-world sample gallery that accompanied every review and informed all our IQ assessments.) Beyond any other feature, the test scene is what allowed the site to become DPReview.
Here's where it all started: Phil Askey's review of the Kodak Pro DCS 520, conducted in 1999 when the site was based in Singapore. It was the site's fourth review, and the camera clearly blew Phil away. In it, he shot the Kodak alongside the Canon Pro70 to establish how the two compared. And it's that need to compare that would eventually prompt the creation of the test scene.
However, it would take several years and many reviews before the full ability to compare arrived.
The test poster - Aug 1999
Phil didn't immediately spot the utility of shooting the same target repeatedly. Some of the bottles cropped up in other reviews (particularly the fine print on the Martini label), and Phil experimented with a variety of setups, including more bottles, cuddly toys and various other targets, to keep challenging the cameras he was shooting.
At the same time, he started to include test images of a poster that included a variety of challenging subjects. A series of portraits was later added below the poster in order to assess skintones. In all these early reviews, this poster and various ad hoc bottle setups were used to compare cameras, but notably, Phil was having to re-shoot the new scenes with each camera or lens every time he wanted to make a comparison.
The 'standard test scene' (of sorts) - Feb 2000
It was the Canon PowerShot S20 review (Feb 2000) that first used the phrase 'Standard Test Scene.' It didn't turn out to be quite as described (the review stresses that the comparison shots were taken within minutes of one another), but the germ of the idea is clearly there.
Another big change happened around this time. Just as the test objects kept changing, suddenly so did the content of the sample galleries. After the S20 review, Phil (and therefore, DPReview) had moved from Singapore to London.
Other tests - April 2000
The Nikon Coolpix 990 review of April 2000, as well as being one of the first reviews to reflect Phil's change of scenery, also showed the arrival of several other tests that would become central to his reviews. The white balance test (using a Kodak color chart), a test of the built-in flash and a resolution test chart all make their first appearance in the same review. All three would still be part of the review process seven years later, when I was being trained to conduct the DPReview tests.
Back to the bottles - July 2000
Phil continued to experiment with test targets over the next few months, replacing bottles with items like a watch and a set of pliers placed across the former test poster, the focus and depth-of-field challenges of which make me anxious even to contemplate. But with the Sony DSC-F505V review in July 2000 and then the Epson PhotoPC 3000Z review, the bottles returned.
Throughout this period you could see Phil settling on his preferred test subjects: a watch dial for fine texture, Crayola wax crayons for color rendering. The test scene itself gained a Kodak color chart and a variety of other elements, but what was still missing was consistency.
The concept was there, but elements shifted around and the camera position changed enough that the comparisons needed to be re-shot for each review.
Consecutive reviews would occasionally have the same test scene shot side-by-side, but then the elements would change again, the scene would be shot from a different position, and so on. It seems amazing now, but even as the reviews gained a permanent section called 'Compared to...', Phil was having to re-shoot both cameras every time he wanted to make a comparison.
A fresh approach - May 2001
By March 2001 the arrangement had arrived at the four bottles that would be central to the scene for the next four years. But it took until May of that year for the scene to settle down and include most of the elements that would still be present when I joined in 2007: the Kodak color and greyscale charts, the blue 'Paul Smith' watch, the batteries, playing card, and Martini and Baileys bottles.
That version of the scene would persist broadly unchanged for the next three years. I say 'broadly' because at least one element kept changing: the fresh flowers placed in a vase towards the right of the scene.
This organic element had the advantage of providing an excellent, lifelike point of comparison, but it also meant the scene was still always changing, now to the point of being seasonal. That in turn still meant regular re-shooting of old cameras for direct comparisons, but there was now enough consistency that the existing work wasn't going entirely to waste each time.
The first interactive comparisons - Apr 2004
It was during this era that Phil built the first system that allowed side-by-side comparison without having to download each full image separately. He tested four 8MP cameras side-by-side and allowed users to dynamically compare them to one another in their respective reviews. It's an approach Phil would use again over the coming year to compare rival models.
An image map let you click on different parts of the scene, which summoned up pre-cropped images and combined them, letting you evaluate each of the comparison cameras' image of each location. Unfortunately this early comparison system was built using JavaScript that the site no longer supports, so you'll just have to imagine how it worked.
Consistency at last - Jan 2005
True consistency finally arrived in 2005. A simplified version of the scene first appeared in the Minolta DiMAGE A200 review in January that year. Perhaps not coincidentally, this is a few months after Simon Joinson started writing reviews for the site: the first time a name other than 'Phil Askey' had appeared on the masthead.
This version of the test scene would remain unchanged for almost five years and was the version in use when Lars Rehm and I joined DPReview in late 2007. To make comparisons, you had to manually crop specific sections from the scene and assemble them in vertical strips.
If the crops hadn't been prepared for the camera we wanted to compare with, or we couldn't find them on the server, we had to track down the original images from the comparison camera and re-crop and position them ourselves. But, critically, the scene was shot and lit consistently enough that you didn't have to re-shoot every camera you wanted to draw a comparison with.
This is the main benefit of shooting a consistent scene under consistent lighting: you can create a vast database of directly comparable images without finding that the weather or alignment has changed since your last review.
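As a loose sketch of what that manual crop-and-strip workflow amounted to (hypothetical filenames and crop coordinates; the real process used in-house tools), using the Pillow library:

```python
from PIL import Image

# Hypothetical scene shots from the cameras being compared,
# framed and lit consistently enough to crop the same region
files = ["camera_a_iso200.jpg", "camera_b_iso200.jpg", "camera_c_iso200.jpg"]
region = (1200, 800, 1440, 920)  # left, top, right, bottom of the detail of interest

crops = [Image.open(f).crop(region) for f in files]

# Stack the crops into the kind of vertical comparison strip
# that used to be assembled by hand for each review
strip = Image.new("RGB", (max(c.width for c in crops),
                          sum(c.height for c in crops)), "white")
y = 0
for crop in crops:
    strip.paste(crop, (0, y))
    y += crop.height
strip.save("comparison_strip.jpg")
```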
The last 'box' shot - Oct 2009
The final version of the test scene to live in its white wooden box came in late 2009 and represented a significant overhaul. It gained a great many fine-detail targets, added to really challenge the latest high-resolution sensors. There was also a dark box with colored cotton reels in it, to challenge the cameras' noise reduction algorithms.
Perhaps just as significant were the changes aimed at increasing our consistency of framing, now that there were many more people 'behind the scene' shooting it. Look at the center of the scene and you'll see there's a pin (placed in the plane of focus), the point of which, when the camera is positioned correctly, should line up with the center of a cross on the back wall of the box. If the camera was too far left or right, or high or low, it didn't quite align.
And, while there's a Siemens star just below it, it was the tree painting on the front of the Baileys bottle we used for focusing. To cope with the 3D nature of the scene we shot cameras at F9, well into diffraction-limited territory for some of the smaller-sensor cameras we were testing. The scene was also very difficult to shoot at short focal lengths, and I wish I could find the photo of Lars with half of his body inside the test scene box, carefully trying to align the Sigma DP1, inches away from the targets.
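For a back-of-the-envelope sense of why F9 was a problem on small sensors (my own numbers, not from the original reviews), the diameter of the Airy disk for green light at aperture N is roughly:

$$ d \approx 2.44\,\lambda N \approx 2.44 \times 0.55\,\mu\text{m} \times 9 \approx 12\,\mu\text{m} $$

That's several times the pixel pitch of the small-sensor compacts of the era (often under 2µm), so diffraction was softening fine detail long before the sensor could resolve it.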
This scene started its life in London and was carefully shipped to Seattle in late 2010, meaning the rancid Baileys had to be shaken from the bottle after many years under hot lights.
The first comparison tool - June 2010
The first full comparison tool was launched in June 2010. The new system meant there was no need to crop or prepare image strips. But it did mean we had to go back and re-shoot a fair number of older cameras, to make sure there were products to compare to, right from the off.
Note there's even a check box that would let you jump straight to the regions of the scene we'd previously shown crops of. We've no idea if anyone ever used or even noticed it.
However, there was no online interface or back end: instead, to make it work, we had a bespoke processing and tagging tool that would create all the thumbnails and tile sections, generate the index and metadata files containing all the image commentary, and arrange everything into the correct folder structure. These folders then had to be manually uploaded to the correct place on the live site, so that a script running at regular intervals would find them and add them to the list of comparable cameras.
This final step would only occur if the script found a file called 'visible.ini' in the correct folder, so we had to be incredibly careful not to upload this file if we were preparing a test with a product that wasn't public yet.
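A minimal sketch of that gating logic, with hypothetical paths and folder layout (the real script was DPReview's own):

```python
import os

UPLOAD_ROOT = "/var/www/comparisons"  # hypothetical upload location

def find_publishable_folders(root=UPLOAD_ROOT):
    """Return only the camera folders explicitly flagged to go live."""
    publishable = []
    for name in sorted(os.listdir(root)):
        folder = os.path.join(root, name)
        # The presence of 'visible.ini' is what publishes a folder;
        # leaving it out kept unannounced cameras hidden.
        if os.path.isdir(folder) and os.path.isfile(os.path.join(folder, "visible.ini")):
            publishable.append(name)
    return publishable

# A job scheduled at regular intervals would call this and append any
# newly flagged folders to the list of comparable cameras.
```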
By the time the scene was replaced, the comparison tool contained 2973 images from 198 cameras.
The new test scene - Sept 2013
The most recent test scene made its first full appearance in September 2013 (though it had sneaked into at least one review while the interactive tool was being finalized). This was the most radical reworking of the scene, and attempted to combine the detail, color and resolution assessments into a single chart.
The inclusion of the resolution targets meant we couldn't arrange it as a multi-aspect-ratio scene (the height of the scene had to be consistent to interpret resolution in terms of lines per picture height). As well as saving us from having to constantly re-shoot wax crayons and resolution tests, the new test scene had a significant benefit: it was much larger and much flatter. The scene is precisely 1m (~39.4") tall, making it around 20 times larger than the box shot that preceded it.
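To make that concrete (a worked example of my own, not from the article): a resolution wedge can only be read in lines per picture height (LPH) if the scene height maps onto the frame height, in which case

$$ \text{LPH} = 2 \times f\,[\text{lp/mm on the scene}] \times H_{\text{scene}}\,[\text{mm}] $$

so detail resolved at 1.5 lp/mm on the 1000mm-tall scene corresponds to 2 × 1.5 × 1000 = 3000 LPH. Crop the scene to a different height for a different aspect ratio and that mapping breaks.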
The larger, flatter scene made alignment much, much easier and eliminated any concerns about depth-of-field. We thought long and hard about incorporating some of the familiar elements of the existing scene into the new one, to maintain continuity, but the vastly different scale, and the need to keep the existing scene intact and shootable while the new scene and comparison tool were being completed, meant a complete break with the past. I regret not including a Martini bottle, for old times' sake.
With hindsight I also wish we'd adopted equivalent apertures for our test shoots, rather than using F5.6 regardless of format. That said, there was significant reader angst about our choice, despite almost all lenses being excellent by the time they're stopped down to F5.6, and I don't regret not spending a decade trying to explain why Micro Four Thirds cameras were being shot at F2.8 instead.
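The equivalence arithmetic in question is just standard crop-factor math (my illustration, not a quote from the reviews):

$$ N_{\text{equiv}} = N \times \text{crop factor} \quad\Rightarrow\quad F2.8 \times 2 = F5.6 $$

Shooting Micro Four Thirds (crop factor 2) at F2.8 would have matched the depth of field, and the total light gathered at a given shutter speed, of a full-frame camera at F5.6.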
The second comparison tool - Sept 2013
The second comparison tool might look a lot like the first one when you use it, but it revolutionized the way the site worked in the background.
It was the first time we had an online interface to a proper database into which the images were loaded. We could batch-apply metadata to define the properties of images (e.g., to mark a batch of images as having been shot using the 'Daylight' lamps) or to add the public-facing text about the lens and camera settings that had been used. The system could also extract some of this information from the files' EXIF fields.
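For the EXIF part, a small sketch of the kind of extraction involved (the field choices are my guess, and it requires a recent version of Pillow; the actual tool was bespoke):

```python
from PIL import Image, ExifTags

def extract_shot_settings(path):
    """Read basic camera settings from a file's EXIF rather than typing them in."""
    exif = Image.open(path).getexif()
    sub = exif.get_ifd(ExifTags.IFD.Exif)  # exposure data lives in the Exif sub-IFD
    return {
        "camera": exif.get(ExifTags.Base.Model),
        "exposure_time": sub.get(ExifTags.Base.ExposureTime),
        "iso": sub.get(ExifTags.Base.ISOSpeedRatings),
    }
```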
All of this sped things up enough that it freed up time to shoot every ISO setting in two different lighting conditions.
The way this code was designed proved to be impressively flexible: you could pick any set of images you wanted to compare, right down to the pixel level, create a 'Scene' with the relevant variables for your comparison, then create a series of 'anchors' to ensure the images stayed aligned as you navigated around, even if their pixel counts were radically different.
Also added was the ability to ask the site to re-process the comparisons to match the lowest selected resolution. This FINALLY released us from the constraints of pixel-level analysis of all images and let us properly tell the story of image-level differences and the impact of sensor and pixel size.
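As a simplified illustration of both ideas (my own reconstruction, not the site's actual code): relative 'anchor' coordinates locate the same scene feature regardless of pixel count, and resizing everything to the lowest resolution in the set turns a pixel-level comparison into an image-level one:

```python
from PIL import Image

def anchor_to_pixels(anchor, image):
    """Map a relative anchor (0-1 in each axis) to pixel coordinates, so the
    same scene feature stays aligned even when pixel counts differ wildly."""
    x_rel, y_rel = anchor
    return round(x_rel * image.width), round(y_rel * image.height)

def match_lowest_resolution(images):
    """Resize every image to the smallest height in the set (preserving
    aspect ratio) for an image-level rather than pixel-level comparison."""
    target_h = min(im.height for im in images)
    return [im.resize((round(im.width * target_h / im.height), target_h),
                      Image.Resampling.LANCZOS)
            for im in images]
```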
A few years into the new scene we adopted the approach of shooting all Raw files with identical exposure parameters, regardless of the manufacturers' tone curves and ISO ratings, allowing a visual assessment of Raw performance that was more consistent than ever.
The end - Mar 2023
With yesterday's addition of the Pro70 to our studio scene, we've come full circle. From what I can tell, the final test scene represents just under a decade's worth of cameras and includes 11,307 separate images from 296 cameras.
The custom code that underpins the comparisons, not to mention the incredibly clever back-end programming and the upload and metadata-tagging tools, is unfortunately what makes the system so difficult to maintain once the site closes. In its own strange way this stands as testament to the brilliant development work that went into creating the part of the test scene that was easiest to overlook: the comparison tool itself.
Even with the larger scene, it could be a pig to shoot, and the comparison tool was so effective that you could compare images with finer precision than we could repeatably focus the scene.
Alas, nothing we could do would stop the portraits from fading under the intense lighting; there are so many aspects like that I'd do differently if I were starting again. But do I regret the countless hours I spent framing, white balancing, focusing, processing, uploading and tagging so many of those eleven thousand images?
Actually, I'd prefer not to answer that.