If dy/dx is not a fraction, then why am I allowed to multiply both sides of an equation by dx?

You can think of them as dx being an infinitesimally small change in x, and dy the corresponding change in y (assuming y is a function of x). This is how they were thought of in the early, nonrigorous days of calculus; Nonstandard Analysis is a relatively recent attempt at making this approach rigorous.

Or you can follow modern elementary calculus textbooks, which define dy as (dy/dx)*dx, and note that this is an approximation of delta-y (the actual change in y) when y is a differentiable function of x and dx is a small delta-x.
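
As a quick illustration of that approximation claim (my own example, not part of the textbook definition), take y = x^2 and compare the actual change with the differential:

\[
\Delta y = (x + dx)^2 - x^2 = 2x\,dx + (dx)^2, \qquad dy = \frac{dy}{dx}\,dx = 2x\,dx .
\]

The two differ only by (dx)^2, which shrinks much faster than dx itself, so treating dy as "the change in y" introduces an error that vanishes in the limit.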