Many theories at the leading edge of physics predict new subatomic particles with specific properties. Unlike the subatomic particles most people are familiar with - protons, neutrons and electrons - these particles are not “stable”: they decay into other particles in fractions of a second. So you can’t look for them in ordinary life or ordinary matter; you have to build special machines to produce them, and then gather evidence that they exist before they decay.
A particle accelerator like the Large Hadron Collider accelerates ordinary particles (protons, in the case of the LHC) to very high speeds and then collides them head-on. The collision results in a shower of debris as new particles are produced directly from the kinetic energy of the colliding particles, according to E=mc^2.
An equivalent form of the same equation is m = E/c^2, which says that the mass of the particles you can produce scales linearly with the energy of the collision. This means that accelerators running at low energies can only produce particles with low mass. Humanity has already fairly thoroughly explored the particles that can be produced at collision energies below roughly 1 TeV.
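To get a feel for the scale, here’s a quick back-of-the-envelope calculation in Python (1 TeV is just the round number from above; the constants are standard values):

```python
# A quick sense of scale for m = E / c^2: how heavy a particle could,
# in principle, be produced from 1 TeV of collision energy?
eV_to_joule = 1.602176634e-19      # joules per electronvolt (exact, by definition)
c = 299_792_458.0                  # speed of light, m/s

E = 1e12 * eV_to_joule             # 1 TeV expressed in joules
m = E / c**2                       # equivalent mass in kilograms
proton_mass = 1.67262192e-27       # kg

print(f"1 TeV of energy ~ {m:.2e} kg")                       # ~1.8e-24 kg
print(f"                ~ {m / proton_mass:.0f} proton masses")  # ~1000 proton masses
```

So a 1 TeV collision can, at best, create something around a thousand times heavier than a proton, and heavier particles require correspondingly higher collision energies.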
The LHC is the first accelerator we’ve built with collision energies well above 2 TeV (its design collision energy is 14 TeV, and it has run at 13 TeV), so people are looking to see whether any new particles appear at these new, higher energies. These results will provide experimental evidence to confirm or refute various competing theories attempting to resolve unanswered questions in particle physics, general relativity, cosmology, etc.
However, actually looking for these particles is fairly subtle, for two reasons. The first is that these particles interact with normal matter only very weakly (which makes them very difficult to detect in the first place), and they usually decay far too quickly to be detected directly even if that were not the case. So particle accelerators are surrounded by large detectors that record essentially just the decay products we are able to see easily. Physicists calculate what spectrum and properties of decay products the theoretical particle they’re looking for would produce, and then check whether that matches what they actually see in the experiment. So detection of the particles themselves is very indirect.
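As a concrete (and heavily simplified) sketch of how that indirect detection works: suppose a heavy particle decays into two photons. The detector measures each photon’s energy and direction, and physicists compute the “invariant mass” of the pair; if many pairs cluster around the same value, that’s evidence a parent particle of that mass was produced. The Higgs boson was in fact found partly through its decay into two photons. The photon numbers below are made up, chosen so the answer comes out near 125 GeV:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from four-momenta (E, px, py, pz).
    Natural units: energies and momenta in GeV, c = 1."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two hypothetical photons (massless, so E = |p|) flying apart back to back.
photon_a = (62.5,  62.5, 0.0, 0.0)
photon_b = (62.5, -62.5, 0.0, 0.0)
print(f"Reconstructed parent mass: {invariant_mass(photon_a, photon_b):.1f} GeV")
# -> 125.0 GeV: consistent with a parent particle of that mass having decayed
```

Real analyses do this for millions of photon pairs (most of which come from ordinary background processes) and then look for a small peak sitting on top of a smooth background.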
The second reason is that particle physics, like all of quantum mechanics, is probabilistic. There is no way to calculate whether a specific event (e.g. the production of a particle) will definitely happen; you can only calculate the probability that it will happen. And the probability of producing most of the particles physicists are now looking for is very small. This means you have to perform an enormous number of collisions to get a statistically meaningful result saying you either have, or have not, produced a specific particle. And you can never be absolutely sure that your conclusion is valid - you can only state that the chance you have made an error is very small.
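To see why the collision counts get so enormous, here is a rough order-of-magnitude sketch; every number in it is an illustrative assumption, not an official LHC figure:

```python
# All numbers below are illustrative assumptions, not official LHC figures.
usable_event_prob = 1e-12   # assumed chance that one collision produces the particle
                            # AND it decays in a way we can cleanly reconstruct
events_wanted     = 1_000   # assumed number of events needed for a solid analysis
collision_rate    = 1e9     # assumed rough order of magnitude of collisions per second

collisions_needed = events_wanted / usable_event_prob
seconds_needed    = collisions_needed / collision_rate

print(f"Collisions needed: {collisions_needed:.0e}")                      # 1e+15
print(f"Running time at that rate: ~{seconds_needed / 86400:.0f} days")   # ~12 days
```

Real searches also have to fight backgrounds and detector inefficiencies, so actual runs take far longer, but the scaling is the point: tiny probabilities per collision force you into astronomical numbers of collisions.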
If you have ever studied probability, you will know that when drawing conclusions, your chance of making an error is higher when you have a smaller number of samples. If you are trying to determine if a coin is biased, 4 flips out of 6 that land on heads is not very good evidence, but 40 flips out of 60 is much better, and 4000 flips out of 6000 is quite convincing.
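You can check those coin-flip numbers directly. The snippet below computes the exact probability of getting at least that many heads from a fair coin; the same 2:1 ratio of heads to tails becomes harder and harder to explain by luck as the sample grows:

```python
from math import comb

def prob_at_least(k, n):
    """Exact probability of getting at least k heads in n flips of a fair coin."""
    ways = sum(comb(n, i) for i in range(k, n + 1))
    return ways / 2**n   # Python handles the huge integers exactly before dividing

for k, n in [(4, 6), (40, 60), (4000, 6000)]:
    print(f"At least {k} heads in {n} flips of a fair coin: P ~ {prob_at_least(k, n):.2g}")
# roughly 0.34, roughly 0.007, and a vanishingly tiny number, respectively
```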
This is essentially what happens at the LHC occasionally - they perform a few trillion collisions, analyze their data, and see a slight statistical “bump” in some signal from the detector that is consistent with a new particle that someone has predicted. They get excited about this result, but they know they have not performed enough collisions to state with certainty that it is real - it might still be a statistical fluke. When they perform additional collisions, the “bump” either disappears (because it was just a statistical fluke) or is confirmed (because it is a real discovery, e.g. the Higgs boson).
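Here is a toy illustration of why collecting more data settles the question. Suppose the first chunk of data shows about 30 extra events above an expected background of 100 in some bin - roughly a 3-sigma bump. If a real particle is being produced at that rate, the excess keeps growing with the data and its significance climbs like the square root of the dataset size; if the 30 events were a one-off fluctuation, the excess stays put while the background (and its noise) keeps growing, so the apparent significance melts away. All of the numbers are made up:

```python
import math

bg_rate  = 100.0   # expected background events per unit of data (made up)
sig_rate = 30.0    # extra events per unit of data if the particle is real (made up)
fluke    = 30.0    # a one-off upward fluctuation seen in the first unit of data

print("data units   real particle   one-off fluke   (rough significance, in sigma)")
for lumi in (1, 4, 16, 64):
    background = bg_rate * lumi
    noise = math.sqrt(background)          # Poisson fluctuation of the background
    real_sig  = sig_rate * lumi / noise    # grows like sqrt(data)
    fluke_sig = fluke / noise              # shrinks as more data comes in
    print(f"{lumi:>10}   {real_sig:>13.1f}   {fluke_sig:>13.1f}")
```

Both scenarios look identical (about 3 sigma) after the first unit of data, which is exactly why a 3-sigma bump is treated as interesting but not conclusive.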
The standard for “getting excited” about a result at the LHC is a significance of 3 sigma, which roughly means that random fluctuations alone would produce a bump at least that big at that particular spot only about 1 time in 740. The standard for announcing a confirmed result (the actual discovery of a particle) is 5 sigma, which corresponds to roughly 1 chance in 3.5 million. Given the number of different searches going on at the LHC, a few 3-sigma false alarms over the life of the project are entirely expected.
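Those sigma-to-odds figures come straight from the tail of the normal (bell-curve) distribution, using the one-sided convention particle physicists use; you can check them with a couple of lines of Python:

```python
import math

def one_in(z):
    """One-sided tail probability of a z-sigma fluctuation of a normal
    distribution, expressed as '1 chance in N'."""
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return 1.0 / p

print(f"3 sigma: about 1 chance in {one_in(3):,.0f}")   # about 740
print(f"5 sigma: about 1 chance in {one_in(5):,.0f}")   # about 3.5 million
```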