
RuntimeWarning: invalid value encountered in double_scalars #14

Open
FrankenDeba opened this issue Feb 10, 2018 · 2 comments

@FrankenDeba
C:\Users\debax\AppData\Local\Programs\Python\Python36-32\python.exe C:/Users/debax/Desktop/node/linear.py
C:/Users/debax/Desktop/node/linear.py:39: RuntimeWarning: overflow encountered in double_scalars
b_gradient+=((2/N)*(-x*(y-(cur_m*x+cur_b))))
after 1000 iterations:
nan
nan
C:/Users/debax/Desktop/node/linear.py:41: RuntimeWarning: invalid value encountered in double_scalars
new_b=cur_b-(learning_rate*b_gradient)
C:/Users/debax/Desktop/node/linear.py:42: RuntimeWarning: invalid value encountered in double_scalars
new_m=cur_m-(learning_rate*b_gradient)

Process finished with exit code 0

@AleeIbrahim

Hi Franken,
If you're using real, unscaled data like I was, you'll run into this problem.
In lines 23, 24
b_gradient += (-2/N) * (y - ((m_current * x) + b_current))
m_gradient += (-2/N) * x * (y - ((m_current * x) + b_current))
the gradients grow larger on every iteration because of the large raw values stored in x and y. Eventually b_gradient and m_gradient exceed the range a float64 (or even an int64) can represent, so they overflow to inf and the next update produces nan.
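For illustration, here is a minimal sketch with made-up numbers (nothing below is taken from linear.py) showing how an overflowed gradient turns the update into inf and then nan in NumPy float64 arithmetic:

import numpy as np

# Hypothetical values: the overflow warning fires when the accumulated
# gradient exceeds the float64 range, and the "invalid value" warning
# fires once an inf parameter meets an inf update (inf - inf == nan).
b_gradient = np.float64(1e300) * np.float64(1e10)  # overflows float64 -> inf
learning_rate = 0.0001
cur_b = np.float64("inf")                          # a parameter already blown up by earlier steps
new_b = cur_b - learning_rate * b_gradient         # inf - inf -> nan (invalid value)
print(b_gradient, new_b)                           # inf nan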

So, to solve the problem, scale your dataset so the values fall roughly between -1 and 1 (or some other small range).
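As a concrete, hypothetical example of that scaling step (the file name, the points array, and its x/y column layout are assumptions, not the repo's actual code), min-max normalisation before running gradient descent could look like this:

import numpy as np

# Hypothetical min-max scaling sketch: points is assumed to be an (N, 2)
# array whose columns are x and y, loaded from a CSV of x,y pairs.
def min_max_scale(points):
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    return (points - mins) / (maxs - mins)  # every value now lies in [0, 1]

points = np.genfromtxt("data.csv", delimiter=",")
scaled = min_max_scale(points)
# Run the gradient-descent loop on scaled instead of points;
# with values in [0, 1] the gradients stay small and no longer overflow.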

@DulalBibek

Thank you, it worked.
