Default float datatype in Python and Cython


I ran into something interesting/weird that I am not able to figure out or comprehend.

Here it is:

Code 1: Python 3.5

a = 0.0
print(type(a))
for i in range(10000):
    a += i
print(a)
print(type(a))

Output:

<class 'float'>
49995000.0
<class 'float'>

Code 2: Cython 0.26

%%cython
cdef float a = 0
print(type(a))
for i in range(10000):
    a += i
print(a)
print(type(a))

Output:

<class 'float'>
49992896.0
<class 'float'>

I am running Debian Stretch (64-bit) and ran both snippets in Jupyter notebooks after loading the Cython extension with %load_ext cython.

I assume the default Python float is float64, since I run a 64-bit OS and am not explicitly setting anything to use float32, and I assumed Cython inherits its float datatype from Python.

Why are the outputs different? The difference seems to get large for larger iteration counts, which makes me think of truncation/rounding off of the trailing bits. Can you explain the reason behind this and how to avoid it?

Edit: This question has been asked with the goal of understanding the difference between the datatypes, and hence should be applicable to a wider audience.
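The suspected truncation can be reproduced without Cython, for example with NumPy's float32 scalar type (NumPy is not part of the original code; this is just a sketch illustrating the rounding hypothesis):

import numpy as np

a = np.float32(0.0)                      # force 32-bit (single precision) storage
for i in range(10000):
    a = np.float32(a) + np.float32(i)    # each partial sum is rounded to a 24-bit significand
print(a)        # noticeably smaller than the exact sum 49995000.0; on typical
                # IEEE-754 hardware this should match the Cython float result
print(a.dtype)  # float32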

In C, float is IEEE single precision, aka float32, while float64 is IEEE double precision, which is what Python's float uses. A Cython cdef float is a C float, so your second loop accumulates in single precision and loses low-order bits along the way. Try:

cdef double a = 0
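With that change, the full Cython cell would look roughly like this; in double precision every partial sum is an integer far below 2^53, so the accumulation stays exact and should match the pure Python result:

%%cython
cdef double a = 0        # C double == IEEE-754 double precision, same as Python's float
print(type(a))
for i in range(10000):
    a += i
print(a)                 # 49995000.0, the exact sum
print(type(a))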
