Recently I’ve been dabbling in how various things are compressed, and I’ve found some of it quite interesting. For instance, I love how people use fractals in images, giving them a practical purpose (now all we need is the same for text! ). Another type of compression algorithm I came across was based on neural networks, and I can honestly say I don’t have a clue how they work. Can some educated Doper try to explain it to me? Use fairly simple terms; my programming experience petered out somewhere around QBASIC, and I’ve only taken high school math courses. Thanks!
I’m going to bump this just once, as I think some may have missed it…
Neural networks are especially good in “granular applications” - that is to say, applications where the problem can be broken down into pieces that can each be worked on separately.
Problems that are not granular tend to be very sequential - each step depending on the previous step - and those tend not to be good problems for neural nets.
Problems like particle analysis, materials simulation, weather simulation, etc. are good because you can drop a grid (or cubic grid) over the item to be analyzed, give each cell of the grid to a processing node, and allow each to be analyzed separately in parallel. Where one grid segment interacts with another, information is exchanged between computing nodes.
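To make that concrete, here’s a rough sketch of the grid idea in Python. The “update rule” (averaging each cell with its neighbors) is just an illustrative stand-in I made up, not any particular simulation - the point is that each cell’s new value depends only on the old values, so each cell could in principle go to a separate processor:

```python
def step(grid):
    # One parallel-friendly update pass over a 2D grid.
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather values from neighboring cells - this is the
            # "information exchanged between computing nodes" part.
            neighbors = [grid[nr][nc]
                         for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                         if 0 <= nr < rows and 0 <= nc < cols]
            # Each cell reads only OLD values, so every cell's update
            # is independent and could run on its own processor.
            new[r][c] = (grid[r][c] + sum(neighbors)) / (1 + len(neighbors))
    return new

# A 3x3 grid with a "hot spot" in the middle; one step spreads it out.
grid = [[0.0, 0.0, 0.0],
        [0.0, 9.0, 0.0],
        [0.0, 0.0, 0.0]]
print(step(grid))
```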
Therefore, since an image can easily be segmented, it is fairly easy to bring into a neural net process.
This is a bit of a simplification but it should be enough.
-B
Basically, a given neuron has several inputs and one output, and each input has a weight associated with it. 1’s and 0’s come along on the inputs; you multiply each weight by its input (1 or 0, as appropriate) and add the products together. If the resulting sum is greater than a preset threshold value, you output a 1; otherwise you output a 0. Systems are designed by modifying the weights.
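That single neuron is small enough to write out directly. Here’s a minimal Python sketch of exactly the rule described above; the particular weights and threshold in the example are made-up values I picked so the neuron behaves like a logical AND:

```python
def neuron(inputs, weights, threshold):
    # Multiply each input (1 or 0) by its weight and sum the products.
    total = sum(w * x for w, x in zip(weights, inputs))
    # Fire (output 1) only if the weighted sum exceeds the threshold.
    return 1 if total > threshold else 0

# With weights of 0.6 each and a threshold of 1.0, only the case where
# BOTH inputs are 1 gets over the threshold (0.6 + 0.6 = 1.2 > 1.0).
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```

“Designing by modifying the weights” just means a training procedure nudges those numbers until the outputs come out right for the examples you show it.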
Here’s a good NN explanatory site: