Why are pixels square?

Could they be made in different shapes and sizes? Would this help with resolution and clarity when zooming?

Pixels aren’t square. Pixels are individual units. They have no inherent shape.

An image is just an array of numbers. It might be something like this:

1 1 1 1 1 1 1 1 1
1 1 2 2 2 2 2 1 1
1 2 1 1 1 1 1 2 1
1 2 1 2 1 2 1 2 1
1 2 1 1 1 1 1 2 1
1 2 1 2 2 2 1 2 1
1 2 1 1 1 1 1 2 1
1 1 2 2 2 2 2 1 1
1 1 1 1 1 1 1 1 1

What is the shape of each individual number? There isn’t one. Now, if you were looking at this or editing it in some kind of image program, as you zoom in it’s probably going to render each individual unit as a square. That’s just because it needs to draw some kind of boundary between each pixel, and it’s easiest just to lay them out on a square grid.
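As a sketch (my own Python, not from the thread), that kind of zoomed-in rendering is just nearest-neighbour scaling: each number gets repeated as a block, and the square blocks are an artifact of the rendering, not a property of the numbers themselves.

```python
# Hypothetical sketch: "zooming in" on the 9x9 number grid above by
# nearest-neighbour scaling. Each unit becomes a square block of
# identical values; the squares come from the renderer, not the data.

image = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 2, 2, 2, 2, 2, 1, 1],
    [1, 2, 1, 1, 1, 1, 1, 2, 1],
    [1, 2, 1, 2, 1, 2, 1, 2, 1],
    [1, 2, 1, 1, 1, 1, 1, 2, 1],
    [1, 2, 1, 2, 2, 2, 1, 2, 1],
    [1, 2, 1, 1, 1, 1, 1, 2, 1],
    [1, 1, 2, 2, 2, 2, 2, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
]

def zoom(img, factor):
    """Nearest-neighbour upscale: repeat each value factor x factor times."""
    out = []
    for row in img:
        scaled_row = [v for v in row for _ in range(factor)]
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

big = zoom(image, 3)
print(len(big), len(big[0]))  # 27 27
```

Nothing about the array itself says "square"; the renderer could just as easily paint each value as a circle or a hexagon.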

When the image gets sent to your monitor, it’s the same kind of thing. Each individual pixel unit gets displayed at a specific point on the screen. That “pixel” on your screen could be three individual color thingies (how’s that for a technical term) right next to each other, one for red, one for green, and one for blue. Those little parts might be square-ish, or they may be more rounded, depending on the construction of your monitor. If you look at a monitor screen very closely though, it’s not a grid of perfect squares with no space at all between them. It’s actually broken up into smaller discrete units.

There’s no reason your rendering program couldn’t draw the individual pixels as little circles as you zoom in, but circles have space in between them. What do you do with that empty space? Or do you overlap the circles? Then what color do you use for the overlap? Squares are just easier.

Best not to have any gaps between the pixels.
Better to have the light produced evenly across a large pixel than to have a tiny dot in the middle glowing really brightly.

Also, it made things easier for the analog signal being displayed on a CRT.
While the CRT had scanlines, it didn't actually draw things pixel by pixel across the screen. It was the source that was flipping the signal from one colour to another.
What that means is that there are no discrete pixels along a line; it was just a continuous waveform. I remember that in some modes the lines weren't totally jammed up next to each other, and a small gap was visible. It looked horrible.

What if it were a combination of shapes to fill in the gaps? Maybe circles combined with four pointed stars to fill in between?

Pixels ARE square, or at least rectangular and most often square or close to it. Conceptually they are points spread over an image to represent it, but in practice they are almost always spread in a rectangular array, and usually one with equal or nearly equal spacing on the two axes. And, often, the way they are filled out is an attempt to maximize illuminated or printed pixel area as a fraction of total area, which makes them square.

And pixels don’t need to be square or, more particularly, arranged in a rectangular array. It would just be a lot more work to make them otherwise.

But it would be better in some ways to go to the trouble of doing this. If they were arranged more like the stones in a typical stone wall, or like the patches of dried mud between cracks in the bottom of a now dried-up lake, certain artifacts of rectangular pixel arrangements would go away. There would be no Moiré patterns, and you would not see pronounced "jaggies" on lines that were slightly off vertical or horizontal. Pixelization would not confound itself with the vertical and horizontal details of the image.

Who wants to design a video card for Penrose tile pixels?

There’s a neat, relevant Wikipedia article on “Tessellation”, which is the tiling of a plane with space filling shapes of one or more forms.

Each pixel can only have ONE colour; if you make it round, there will be "white" between them.

For printers it's round, but then it's called dpi (dots per inch).
The more dots you have in the predefined area, the more detail you have, since each dot is smaller; the area is defined and limited.

Pixels are not defined by size, since they are a definition of what colour a specific area has.

You can display a 100x100-pixel image in an area of half an inch or 2 meters.
Or let's say a 1366x768-pixel screen can be 10.1", 14.1", 15.6", 17.3" or 42".
The picture will look sharper the smaller the screen is, since those 1,049,088 individual pixels are way smaller: each individual pixel-reproduction element (say, an LED or bulb for a screen) is physically smaller.
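The "smaller screen, sharper picture" point is just pixel density. A quick sketch (my own arithmetic, not from the thread) of pixels per inch (PPI) for a 1366x768 panel at the diagonal sizes mentioned above:

```python
import math

# Sketch: pixel density for a fixed 1366x768 pixel count at various
# screen diagonals. Same number of pixels, bigger screen -> bigger
# (and hence more visible) pixels.

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the screen diagonal."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

for d in (10.1, 14.1, 15.6, 17.3, 42.0):
    print(f'{d:5.1f}"  ->  {ppi(1366, 768, d):6.1f} PPI')
```

At 10.1" this works out to roughly 155 PPI; at 42" it drops to roughly 37 PPI, which is why the same image looks much coarser on the big screen.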

Other layouts of pixels have been tried (especially if you include the phosphor dot geometries of CRT displays - which, whilst not exactly pixels, are closely related enough to warrant discussion in the same topic).

For example, the pixels of the LCD display on the XO-1 One Laptop Per Child device were roughly diamond-shaped, although still laid out in a grid.
http://en.wikipedia.org/wiki/OLPC_XO-1#Display_resolution

And other displays (especially OLEDs) employ different layouts - such as PenTile:
http://en.wikipedia.org/wiki/PenTile_matrix_family

This would be pointless and computationally very expensive.

For the best display, they’d be hexagonal, but that would be computationally more expensive.

Instead, they just keep making them smaller, and at this point it hardly matters what shape they are (e.g., on my 15" laptop screen that’s 1920x1080).

Only when printing (on white paper). On a (black) screen, there would be black between them.

It’s not a million miles different from the PenTile Matrix 2 detailed here:

How about Fuji Super CCD?

That's the reason I used these "" marks when I used the term "white".
Also, when printing, it's not pixels anymore; it's dpi on the paper. You can print a single pixel at whatever size your printer can print.

A pixel is ONE unit containing a single piece of colour information, but it is not defined by size.

The size of the pixel is defined by the “displaying” unit.

Within an image file, pixels are square, but you can display ONE single image pixel over several display pixels on a screen; each single pixel on a screen can only produce ONE single colour.

What the form or build factor of your screen is, is another matter; see the picture in the link. You can see the individual pixels via pixelation, but the pixels of this LED screen are not square.

That might be interesting if there were any explanation. Could they be trying to exploit the fact that our eyes can resolve blue less well than other colors?

There's a bit more explanation here. I think what they're trying to achieve is subpixel rendering that works in both horizontal and vertical directions (current subpixel rendering on LCD screens, e.g. Windows ClearType, only improves the horizontal resolution).

I knew a guy who worked on an R&D project with a small company to develop exactly this: a CRT or similar display with pixels on a hexagonal grid.

That was several years ago. Last I heard (I think), the company was getting into other things, and I guess the hexagonal project never came to anything.

That’s exactly what the Wikipedia page says, so yes, that’s at least what they claim. I don’t think the ratios are optimized for our retina, though.

CRTs used two main technologies: aperture grille and shadow mask.

Side note: Wikipedia has a picture of the aperture grille Trinitron TV that I used to play NES on. I had completely forgotten about it.

Even if a hex grid (or a triangular grid or Penrose grid or whatever) were actually proven to be optimal, it’d still have to contend with the fact that we’ve got a whole lot of computational infrastructure already invested in rectangular pixels (like, for instance, every image format ever). If you wanted to move to some other grid, you’d have to either replace all of that, or come up with some efficient way of converting everything on the fly.
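That on-the-fly conversion would amount to resampling: for each cell of the new grid, work out where its centre lands on the old square grid and pick (or blend) the nearby pixels. A minimal nearest-neighbour sketch, assuming a hex layout where odd rows are offset by half a cell (all names here are my own, purely illustrative):

```python
import math

# Hypothetical sketch of square-grid -> hex-grid resampling by nearest
# neighbour. Hex-centre rows are offset by half a cell horizontally and
# packed vertically at sqrt(3)/2 spacing.

def to_hex_grid(img, hex_rows, hex_cols):
    h, w = len(img), len(img[0])
    row_step = math.sqrt(3) / 2  # vertical spacing between hex rows
    out = []
    for r in range(hex_rows):
        row = []
        for c in range(hex_cols):
            # centre of this hex cell in square-pixel coordinates
            x = c + (0.5 if r % 2 else 0.0)
            y = r * row_step
            # nearest square pixel, clamped to the image bounds
            xi = min(max(int(round(x)), 0), w - 1)
            yi = min(max(int(round(y)), 0), h - 1)
            row.append(img[yi][xi])
        out.append(row)
    return out

checker = [[(x + y) % 2 for x in range(8)] for y in range(8)]
hexed = to_hex_grid(checker, 9, 8)
print(len(hexed), len(hexed[0]))  # 9 8
```

Even this trivial version has to touch every pixel of every frame, which hints at why nobody has been eager to retrofit the entire rectangular-pixel pipeline.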

I thought PenTile was solely to compensate for green or blue sub-pixels in OLEDs not lasting as long, so they put more of those in?

You COULD make pixels of different shapes, but they don’t because it’s just cheaper to make them smaller instead.