I’ve got a Canon 40D, which has roughly 10 million effective pixels.
I would like to have an image blown up to fit on a 12 foot by 10 foot wall at at least 300 DPI.
12 x 10 = 120 square feet.
120 x 144 = 17,280 sq. inches.
17,280 x 300 = 5.184 million total dots/pixels.
So, knowing that, I could push the DPI up even further to almost 600 to get the best image quality, correct?
300 dpi means 300 x 300 pixels per square inch, so 17,280 x 300 x 300 ≈ 1.56 gigapixels.
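To make the linear-vs-square distinction concrete, here is a quick back-of-the-envelope sketch in plain Python (no libraries, nothing camera-specific; the function name is just for illustration):

```python
# PPI/DPI is a *linear* measure, so it gets squared when you count total pixels.

def pixels_needed(width_ft, height_ft, ppi):
    """Total pixels required for a width_ft x height_ft print at `ppi` pixels per inch."""
    width_px = width_ft * 12 * ppi
    height_px = height_ft * 12 * ppi
    return width_px * height_px

total = pixels_needed(12, 10, 300)        # the 12 ft x 10 ft wall at 300 ppi
print(f"{total / 1e9:.2f} gigapixels")    # ~1.56 gigapixels, not ~5 megapixels
```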
But, that’s all pointless anyway. What is your viewing distance? That will determine your necessary resolution.
Also, you can’t get blood from a turnip - you will never get more resolution than your 40D can supply. Upscaling just makes the pixel boundaries smoother.
Dots are not pixels, so DPI doesn’t translate to pixels per inch. Depends on the type of printer, but it typically takes about 10 x 10 dots to represent a pixel.
So if we consider 300 DPI to be equivalent to 30 PPI, 12 ft x 10 ft requires (12x12x30)x(10x12x30) = 15 million pixels.
Further complication is that a 10-megapixel camera doesn’t have 10 million color pixels. It has 5 million green pixels, 2.5 million red pixels and 2.5 million blue pixels. It may produce a 10-megapixel image file, but there is some interpolation involved. The true, effective resolution of the resulting color image is closer to 5 million pixels.
But all this is to say, a 300 ppi, 10’x12’ printout has more than enough resolution to show all the detail in a 10-megapixel camera output. At 150 ppi it may be marginal.
That’s cutting it very close for such a big image.
I would suggest that you work with your printer for best results. Depending on the subject matter, some selective editing (like local sharpening of the eyes in a portrait) might be all you need.
But, until you work with the printer, don’t bother to up-res the image - they will probably do that as needed.
Be aware that most large-format printers are not continuous-tone devices. They use diffusion dithering to create their full palette of colors, so creating a high-res image is often a waste of time, because the spatial data is smeared out in the printing process. I once did an experiment with an HP “600 dpi” plotter, and found out that there was absolutely no visible difference between a 50 dpi file and a 600 dpi file when plotted at 36" wide.
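To illustrate what diffusion dithering does, here is a toy Floyd–Steinberg sketch in Python (it assumes numpy is available; this is only the textbook algorithm, not what any particular printer driver actually runs):

```python
import numpy as np

def floyd_steinberg(gray):
    """Turn a float image in [0, 1] into a binary dot pattern by error diffusion."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            out[y, x] = new
            err = img[y, x] - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth left-to-right gradient comes out as a pattern of dots whose *density*
# encodes the tone, so individual printer dots no longer map 1:1 to image pixels.
gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
print(floyd_steinberg(gradient)[:4, :16])
```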
There are tools like http://a-sharper-scaling.com/ that can help, but 300 ppi on a 10’x12’ print isn’t typically needed. Around 30"x45" is as large as you can go with 10MP before falling below 80 ppi, which is what most people find acceptable for close viewing distances.
11"x16" is as big as you can go at 300dpi for 10MP, but at 20′ viewing is 15dpi is enough for acceptable results.
I have a 24"×36" print in my living room from a 3.35MP camera and it looks great, but even with my GFX which is 51.4 MP can only be ~60" on the long side before it drops below 300dpi.
Note that I have to stay at really wide apertures and use weighted tripods and remote releases to even get close to that level of detail. Even with a much larger sensor I am diffraction limited at f/11 on the GFX, and on my D800, which is 36MP in 35mm format, I have to stay below f/8 before diffraction limits resolution.
Looks like I could get by with 50 DPI at 6 feet, per that link, thanks beowulff. But there isn’t much hope for the 40D to capture enough data to cover 120 square feet.
I guess it depends on how we’re looking at this. I can certainly tell the difference in detail between a 15MP file and a 36MP file printed at sizes starting around 13 inches by 19 inches. I have art photographer friends who routinely print large photos (around four feet by six feet) in the hundreds-of-megapixels to gigapixel range, and the detail is incredible up close and easily distinguishable from something at 15MP. But the “viewing distance” part is the important one. Most people overestimate how much resolution they need for an acceptable print. For the OP, the print will look very obviously pixellated at the distance you’d view an 8"x10" from. For a 6’ viewing distance, so it doesn’t look pixellated, you want something around 40-50 ppi (there’s a rule of thumb that says 3438 divided by the viewing distance in inches is the minimum ppi you want. That seems about right to me, in my experience, but I haven’t tested it empirically.)
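For anyone who wants to plug in their own numbers, that rule of thumb is a one-liner (treat the 3438 constant as the ballpark figure from the post above, derived from roughly one arc-minute of eye resolution, not gospel):

```python
def min_ppi(viewing_distance_in):
    """Ballpark minimum ppi so a print doesn't look pixellated at this distance."""
    return 3438 / viewing_distance_in

print(round(min_ppi(6 * 12)))    # ~48 ppi for a 6-foot viewing distance
print(round(min_ppi(20 * 12)))   # ~14 ppi from 20 feet away
```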
One thing you can do (if you haven’t already taken the image) is to shoot it as a pano.
I’ve generated enormous images from a series of overlapping frames, stitched together with Autopano Giga. It requires a stationary subject and some time, but the results are amazing.
If you want to, and the subject is appropriate, you can always try doing a panoramic stitch. So, if you’re normally using a 24mm lens to get a wide-angle shot of something, you might try using a telephoto lens and capturing, say, a 2x3 or 4x6 grid of images, then stitching them together in post. Now you have around 60 or 120 megapixels to work with (well, effectively a good bit less than that, as there will be significant overlap).
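A rough way to estimate the payoff after discounting that overlap (the ~30% overlap per frame and the 10.1MP frame size are just assumptions for illustration, so these come out lower than the naive frame-count times 10MP):

```python
def stitched_megapixels(rows, cols, frame_mp=10.1, overlap=0.30):
    """Very rough effective megapixels from a stitched grid, discounting overlap."""
    unique_fraction = (1 - overlap) ** 2    # each frame keeps ~(1-overlap)^2 unique area
    return rows * cols * frame_mp * unique_fraction

print(f"{stitched_megapixels(2, 3):.0f} MP")   # ~30 MP from a 2x3 grid
print(f"{stitched_megapixels(4, 6):.0f} MP")   # ~119 MP from a 4x6 grid
```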
ETA: ninja’d by beowulff. There are also techniques for effectively increasing resolution by taking many photos (like a dozen) handheld and then aligning and combining them in post, somewhat like the way forensic video experts combine multiple frames of a video where you can’t quite make out the license plate number into a single frame where it’s legible.
Yes, 300 DPI is typically for, say, an 8x10 held a foot or two from your eyes. Scale from there. 12Mp is 3000x4000, which implies a 300 dpi image of about 10"x13.3"; 24Mp (my current camera) gives a 4,000x6,000 image, or 13.3"x20". My first digital camera was 2.4Mp and still produced detail I described as “painfully sharp” on a 4x6 snapshot. (Compared to those fuzzy low-end film cameras I was used to…)
The only reason to have something sharper is if you are looking for detail in each segment, like a big blueprint: you want the detail to still be sharp on a 5-foot sheet, to walk up to your mural and recognize people in a crowd scene, to read fine text, etc. But unless you provide a stepladder, that resolution is wasted for at least half the picture. For a 10’x12’ viewed from 6 feet away you need 1/6 the resolution of a 300 dpi 8x10, so 50 dpi instead of 300. 10’ is 120", so 50x120 = 6,000 pixels; similarly, 12’ = 144", so 7,200 pixels. 6,000x7,200 = 43Mp, about the resolution of a top-end camera today.
In fact, I suspect a good top-end consumer camera (like my Canon, 24Mp) is more than adequate for the task. As others point out, beyond that, stitching photos is a good approach; but if you want results that stand up to very close scrutiny, you need a tripod and careful stitching. Unless you are going for the NASA mosaic look…
But yes, print out a fragment (a page, or glue together several) at the desired resolution and see how it looks from the appropriate distance. I’ve blown up my 2.4Mp photos to 8x10 easily, and even 11x14, and they still look good. That’s about 150 dpi viewed from 2 feet away.
Should also point out that a 65-inch 4K TV is typically viewed from, say, 8 feet away. That’s 3840 pixels across roughly 57 inches of screen width, about 68 dpi (3840/57), and a 4K TV still looks damn good!
(Actually, I use a 4K 43" TV as my computer monitor. It’s like having 4 regular monitors glued together. I sit back maybe 2 to 3 feet.)
Here is an example 16209 x 4119 version I made just to test a Flickr scaling problem (they changed their method a few years back and broke the service for me).
WARNING, this is a huge image, and of no commercial or artistic value.
For commercial work you really need to care about your nodal point to avoid parallax, but the software is pretty good for casual uses. Any Photoshop newer than CS6 does a really good job if you don’t mind issues like the leaning Space Needle in that photo.
At least these days the software doesn’t produce tearing, even without lots of attention to parallax, as long as you use a tripod.