cast Distribution Inputs to floatX instead of tensor_variable #2233
Conversation
Could you test whether it passes the tests in https://github.com/pymc-devs/pymc3/blob/master/pymc3/tests/test_distributions.py#L827-L838 locally? The `_repr_latex_` is using the …
Yep, that one fails! How to deal with that? You could simply use the Distribution name and attribute names?
I think we should change the function https://github.com/pymc-devs/pymc3/blob/master/pymc3/util.py#L79
This might work:

```python
from numpy import asscalar  # assuming util.py imports this from numpy


def get_variable_name(variable):
    """Returns the variable data type if it is a constant, otherwise
    returns the argument name.
    """
    try:
        name = variable.name
    except AttributeError:
        name = None
    if name is None:
        if hasattr(variable, 'get_parents'):
            try:
                names = [get_variable_name(item)
                         for item in variable.get_parents()[0].inputs]
                return 'f(%s)' % ','.join(
                    [n for n in names if isinstance(n, str)])
            except IndexError:
                pass
        try:
            value = variable.eval()
        except AttributeError:
            value = variable
        if not value.shape:
            return asscalar(value)
        return 'array'
    return name
```

For some reason I cannot check out your branch. Also, any idea why Travis is not building, @twiecki?
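A quick duck-typed check of the constant-handling branch, with no Theano required. `Const` is a made-up stand-in for a nameless constant, and `np.asscalar` is swapped for `.item()` since `asscalar` was removed from recent numpy:

```python
import numpy as np


class Const:
    """Hypothetical stand-in for a nameless Theano constant:
    no useful .name, and .eval() returns the wrapped value."""
    name = None

    def __init__(self, value):
        self._value = np.asarray(value)

    def eval(self):
        return self._value


def get_variable_name(variable):
    """Condensed copy of the helper sketched above (constant branch only;
    np.asscalar replaced by .item())."""
    name = getattr(variable, 'name', None)
    if name is not None:
        return name
    try:
        value = variable.eval()
    except AttributeError:
        value = variable
    if not value.shape:
        return value.item()  # scalar constant -> its value
    return 'array'


print(get_variable_name(Const(2.5)))         # 2.5
print(get_variable_name(Const([1.0, 2.0])))  # array
```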
Yes, go for it. |
Can't figure out why it isn't building. Can you either make some of the suggested changes and push, or just `git commit --allow-empty` and then push?
```diff
@@ -247,7 +247,7 @@ def _repr_latex_(self, name=None, dist=None):
         mu = dist.mu
         try:
             cov = dist.cov
         except AttributeErrir:
```
yowza
pymc3/distributions/multivariate.py (outdated)

```diff
-        self.k = tt.as_tensor_variable(shape)
-        self.a = a = tt.as_tensor_variable(a)
+        self.k = floatX(shape)
+        self.a = a = floatX(a)
```
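For context, a minimal sketch of what this cast does. Assumption: `floatX` mirrors pymc3's helper in `pymc3/theanof.py`, with the target dtype hard-coded here instead of read from `theano.config.floatX`:

```python
import numpy as np

FLOATX = 'float64'  # stand-in for theano.config.floatX


def floatX(X):
    """Cast the input to the configured float dtype (no symbolic wrapping)."""
    try:
        return X.astype(FLOATX)
    except AttributeError:
        # Scalars and lists have no .astype; go through numpy.
        return np.asarray(X, dtype=FLOATX)


print(floatX([1, 2, 3]).dtype)  # float64
```

This is the point of the PR: a plain dtype cast keeps `random()` working on numpy values directly, instead of paying for symbolic wrapping via `tt.as_tensor_variable`.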
Total nit, and not necessary, but it might be nicer to just use `self.a` below and stop referring to `a` (same with `Sigma` and `nu`).
The test failed because you need scipy 0.18; otherwise there is no `skewnorm`. The requirements have to be updated!
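A guard along these lines could also skip the test on older scipy (hypothetical helper, simplified to plain numeric version strings; the real fix here is bumping the requirement):

```python
def has_skewnorm(scipy_version):
    """True when scipy ships scipy.stats.skewnorm (added in 0.18).
    Assumes a plain numeric 'major.minor[.patch]' version string."""
    major_minor = tuple(int(p) for p in scipy_version.split('.')[:2])
    return major_minor >= (0, 18)


print(has_skewnorm('0.17.1'))  # False
print(has_skewnorm('0.18.0'))  # True
```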
Travis still not starting...?
https://travis-ci.org/pymc-devs/pymc3/requests
What have you done, @hvasbath 😂!
WTH???? I am sitting in Saudi Arabia; maybe my IP is suspicious ... :)
You are on the naughty list 😂!
Had to connect GitHub with Travis to resolve that; apparently this was a false positive of their malware detection ...
OMG, all these failed tests! Where do all these float64 casts come from? I have no time right now to look into all of these tests and what's going wrong; some of them also assume the variables to be tensors ... not sure, what are your opinions?
I don't think it's that bad; you just need to recast some of the parameters into tensors.
But this is the point in the first place! We do not want to do that, in order to have the random method perform fast ...
That is surprising. @hvasbath I would imagine that a single issue underlies most of the failing tests, so perhaps just look into one and I would hope that it would resolve the others.
I know, as long as they are not tensors in the …
…nto random_slowdown (conflicts: pymc3/distributions/discrete.py)
Rebased the PR, fixed tests in variational_inference.
This likely only needs the new fix from @junpenglao included, then also these last tests should pass. However, I cannot rebase again without getting hundreds of conflicts. Can anybody say why?
Learned something again :) ... Merging did the trick and all the mess went away.
```diff
-        self.alpha = alpha = tt.as_tensor_variable(alpha)
-        self.m = m = tt.as_tensor_variable(m)
+        self.alpha = floatX(alpha)
+        self.m = floatX(m)

         self.mean = tt.switch(tt.gt(alpha, 1), alpha *
```
Shouldn't this be `self.alpha` instead of just `alpha`?
Yep missed that!
```diff
@@ -438,17 +441,17 @@ class DiscreteUniform(Discrete):

     def __init__(self, lower, upper, *args, **kwargs):
         super(DiscreteUniform, self).__init__(*args, **kwargs)
-        self.lower = tt.floor(lower).astype('int32')
-        self.upper = tt.floor(upper).astype('int32')
+        self.lower = intX(np.floor(lower))
```
Will it work if `lower` is a tensor? Is `intX(lower)` sufficient?
Having Theano tensors as distribution bound inputs? Is that intended? I should have known that in advance! If yes, we can forget about this whole PR ...
Yeah, a Theano tensor could be a distribution bound. In that case, can you not do something similar to `Uniform(Continuous)`:

```python
self.lower = floatX(lower)
self.upper = floatX(upper)
```
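The concern above in miniature: `np.floor` plus a hypothetical `intX` (assumed analogous to `floatX`, casting to a fixed int dtype) works fine on concrete numbers and arrays, but `np.floor` applied to a symbolic Theano tensor would not yield a numpy array, which is why the plain `floatX(lower)` pattern is safer for tensor-valued bounds:

```python
import numpy as np


def intX(X):
    """Hypothetical intX helper: cast to a fixed int dtype, mirroring floatX."""
    return np.asarray(X).astype('int32')


# Fine for concrete inputs:
print(intX(np.floor(2.7)))         # 2
print(intX(np.floor([1.9, 3.2])))  # [1 3]
```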
Oh no, due to the rebase and then the merge there are duplicated commits now! Argh ... How to fix that?
@hvasbath Looks OK to me. We will do a squash-merge in any case, so it doesn't really matter.
OK, then I will close the other PR again.
Continued in the other, cleaned-up PR!
Removes slowdown again:
#1824
Running the test from that thread:

```
[v8, v7, v6, v5, v4, v3, v2, v1]
('time for initialising population', 1.4203970432281494)
```

Previously 20+ s.
Also fixes AttributeError in multivariate.py line 250.
Was not sure about the Mixture distribution, as there are a lot of theano and numpy operators mixed in the random method generation. Someone with insight should fix that one ...