Java Developers: Question about nativity and efficiency.

Just for fun and to keep my skills sharp, I created a derived class of Math which contains methods that Math doesn’t. Since the Math class uses native mathematical functions, I feel like I’m bogging down the program by calling the “soft-coded” Math class. For example, here’s my original method for calculating the standard deviation of an array:


public static double stdDev(double[] a) {
        double s = mean(a);      // mean() is a method in this class
        double[] t = new double[a.length];
        for(int i = 0; i < a.length; i++) {
            t[i] = Math.pow(a[i] - s, 2);
        }
        return Math.sqrt((long) sum(t) / a.length);
}
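For reference, here is a self-contained version of that method. The mean() and sum() helpers are my own fill-ins (the post only references them), and the (long) cast is dropped in this sketch, since truncating the sum of squared deviations to a long would distort the result:

```java
// Self-contained sketch of the stdDev method from the post.
// mean() and sum() are hypothetical fill-ins; the post only references them.
public class Stats {

    static double sum(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s;
    }

    static double mean(double[] a) {
        return sum(a) / a.length;
    }

    public static double stdDev(double[] a) {
        double s = mean(a);
        double[] t = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            t[i] = Math.pow(a[i] - s, 2);
        }
        // Population standard deviation; no (long) cast, so no truncation.
        return Math.sqrt(sum(t) / a.length);
    }

    public static void main(String[] args) {
        System.out.println(stdDev(new double[]{2, 4, 4, 4, 5, 5, 7, 9}));  // 2.0
    }
}
```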

Could the above code be made more efficient if I used my own native pow() and sqrt() methods?


public static native double pow(double a, double b);
public static native long sqrt(long a);
   
public static double stdDev(double[] a) {
        double s = mean(a);
        double[] t = new double[a.length];
        for(int i = 0; i < a.length; i++) {
            t[i] = pow(a[i] - s, 2);
        }
        return sqrt((long) sum(t) / a.length);
}

This is my first time experimenting with natives. Seems to me like it would be faster to call the C-coded pow and sqrt rather than going through the Math class.

java.lang.Math is (under the hood) already written in native code, so you wouldn’t gain anything unless your custom implementation uses exotic CPU instructions that the standard package does not. (For example, if you wanted to use the latest Intel floating-point instructions.) If you’re simply going to write your own pow() and sqrt() like the sample code you might find in a college textbook, you’re not going to beat the standard Java libraries.

Benchmark it by running each a few million times. I bet your version is slower.
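A rough sketch of such a micro-benchmark (the Newton's-method mySqrt stand-in below is my own illustration, not the poster's native code; naive timing like this is prone to JIT warm-up effects, so a harness such as JMH would give more trustworthy numbers):

```java
// Micro-benchmark sketch comparing Math.sqrt against a hand-rolled pure-Java
// sqrt over a few million calls. mySqrt is a hypothetical stand-in.
public class SqrtBench {

    // Naive Newton-Raphson square root: x' = (x + a/x) / 2.
    static double mySqrt(double a) {
        double x = a;
        for (int i = 0; i < 20; i++) {
            x = 0.5 * (x + a / x);
        }
        return x;
    }

    public static void main(String[] args) {
        final int runs = 5_000_000;
        double sink = 0;   // accumulate results so the JIT can't discard the work

        long t0 = System.nanoTime();
        for (int i = 1; i <= runs; i++) sink += Math.sqrt(i);
        long t1 = System.nanoTime();
        for (int i = 1; i <= runs; i++) sink += mySqrt(i);
        long t2 = System.nanoTime();

        System.out.printf("Math.sqrt: %d ms, mySqrt: %d ms (sink=%f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sink);
    }
}
```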

I understand java.lang.Math is already written in native code, hence my question. I thought it might be faster to skip the step of calling the Math class.

HotSpot compiles Math.sqrt to the SQRTSD instruction on x86 machines, so it’s unlikely you’re going to be able to beat the processor’s native instruction with any code of your own.

For the second part of your question, AFAIU, the answer is no. All methods in java.lang.Math are static, so there’s no vtable machinery to set up that’s usually associated with classes, and the compiler should be producing the same code as if you’d called the native functions directly.

As I suspected, Javac produces exactly the same bytecode in both circumstances:

Compiling this:



public class test {

	public static void main(String[] args) {
		double x = Math.sqrt(5);
		System.out.println(x);
	}

}


Produces this:



Compiled from "test.java"
public class test extends java.lang.Object{
public test();
  Code:
   0:	aload_0
   1:	invokespecial	#1; //Method java/lang/Object."<init>":()V
   4:	return

public static void main(java.lang.String[]);
  Code:
   0:	ldc2_w	#2; //double 5.0d
   3:	invokestatic	#4; //Method java/lang/Math.sqrt:(D)D
   6:	dstore_1
   7:	getstatic	#5; //Field java/lang/System.out:Ljava/io/PrintStream;
   10:	dload_1
   11:	invokevirtual	#6; //Method java/io/PrintStream.println:(D)V
   14:	return

}


Compiling this:



public class test {

	public static native double sqrt(double a);

	public static void main(String[] args) {
		double x = sqrt(5);
		System.out.println(x);
	}

}


Produces this:



Compiled from "test.java"
public class test extends java.lang.Object{
public test();
  Code:
   0:	aload_0
   1:	invokespecial	#1; //Method java/lang/Object."<init>":()V
   4:	return

public static native double sqrt(double);

public static void main(java.lang.String[]);
  Code:
   0:	ldc2_w	#2; //double 5.0d
   3:	invokestatic	#4; //Method sqrt:(D)D
   6:	dstore_1
   7:	getstatic	#5; //Field java/lang/System.out:Ljava/io/PrintStream;
   10:	dload_1
   11:	invokevirtual	#6; //Method java/io/PrintStream.println:(D)V
   14:	return

}


My answer is based on my experiences 5 years ago, and with Sun’s JDK which was 1.4.2 at the time (if I remember correctly).

I created my own integer alternatives to the trig and sqrt functions for a simulation - these alternatives were very approximate, precision was very low - but they were orders of magnitude faster than Java floating point at the time.

Were they orders of magnitude less precise? :wink:

Absolutely. But for the limited range of values (18 bits to the left of the “decimal” point and 13 bits to the right), and for the limited number of operations per value - they were quite effective.
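For illustration, an integer square root in that 18.13 fixed-point format might look like the following. This is a hypothetical reconstruction of the approach, not the poster's original code:

```java
// Illustrative fixed-point sqrt in an 18.13 format (18 integer bits, 13
// fractional bits, as described in the post). Hypothetical reconstruction.
public class FixedSqrt {

    static final int FRAC_BITS = 13;           // bits to the right of the binary point
    static final int ONE = 1 << FRAC_BITS;     // 1.0 in 18.13 fixed point

    // Conversions between double and 18.13 fixed point (for testing only).
    static int toFixed(double d)  { return (int) Math.round(d * ONE); }
    static double toDouble(int f) { return (double) f / ONE; }

    // Integer Newton iteration: x' = (x + a/x) / 2, entirely in fixed point.
    static int sqrtFixed(int a) {
        if (a <= 0) return 0;
        int x = a > ONE ? a : ONE;             // crude initial guess, always >= 1.0
        for (int i = 0; i < 24; i++) {
            // a/x in fixed point: (a << FRAC_BITS) / x, widened to long
            // so the shift cannot overflow an int.
            int div = (int) (((long) a << FRAC_BITS) / x);
            x = (x + div) >> 1;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(toDouble(sqrtFixed(toFixed(2.0))));   // ≈ 1.414
    }
}
```

With 13 fractional bits the resolution is 1/8192 ≈ 0.00012, which matches the trade-off the poster describes: low precision, but no floating-point operations at all.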