C - erroneous output after multiplication of large numbers stored in a double
I'm implementing my own decrease-and-conquer exponentiation method.
Here's the program:
```c
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>

double dncpow(int a, int n)
{
    double p = 1.0;
    if (n != 0) {
        p = dncpow(a, n / 2);
        p = p * p;
        if (n % 2) {
            p = p * (double)a;
        }
    }
    return p;
}

int main()
{
    int a;
    int n;
    int a_upper = 10;
    int n_upper = 50;
    int times = 5;
    time_t t;
    srand(time(&t));
    for (int i = 0; i < times; ++i) {
        a = rand() % a_upper;
        n = rand() % n_upper;
        printf("a = %d, n = %d\n", a, n);
        printf("pow = %.0f\ndnc = %.0f\n\n", pow(a, n), dncpow(a, n));
    }
    return 0;
}
```
My code works for small values of a and n, but a mismatch between the output of pow() and dncpow() is observed for inputs such as:
a = 7, n = 39
pow = 909543680129861204865300750663680
dnc = 909543680129861348980488826519552
I'm pretty sure the algorithm is correct, but dncpow() is giving me wrong answers. Can you please help me rectify this? Thanks in advance!
It's as simple as that: these numbers are too large for your computer to represent in a single variable. With a floating point type, the exponent is stored separately, and therefore it's still possible to represent a number near the real result, by dropping the lowest bits of the mantissa.
Regarding this comment:

"I'm getting similar outputs upon replacing 'double' with 'long long'. The latter is supposed to be stored exactly, isn't it?"
- If you call a function taking double, it won't magically operate on long long instead. The value is converted to double, and you'll get the same result.
- Even with a function handling long long (which has 64 bits on nowadays' typical platforms), you can't deal with such large numbers: 64 bits aren't enough to store them. With an unsigned integer type, the result will "wrap around" to 0 on overflow. With a signed integer type, the behavior on overflow is undefined (but it will typically still wrap around). Either way, you'll get a number that has absolutely nothing to do with the expected result. That's arguably worse than the result from a floating point type, which is merely imprecise.
For exact calculations on large numbers, the way to go is to store them in an array (typically of unsigned integers such as uintmax_t) and implement the arithmetic yourself. That's a nice exercise, and a lot of work, especially when performance is of interest (the "naive" arithmetic algorithms are typically inefficient).
For a real-life program, you won't want to reinvent the wheel here, as there are libraries for handling large numbers. The arguably best known one is libgmp. Read the manuals there and use it.