Thursday, April 5, 2018

generative adversarial networks (GANs) and a weird ML code-phrase

what is a GAN supposed to do? 

You input some data and the point is to output data that's similar to the input, but synthetic. You try to infer the distribution underlying your sample, and then you spit out other points from that distribution. So if you have a sample of cute doggy faces, you'd expect to be able to produce lots of new, synthetic pictures of cute doggy faces.
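For concreteness, this is usually formalized (following Goodfellow et al., 2014) as a two-player minimax game over a value function V, played between a generator G (the "actor" below) and a discriminator D (the "critic"):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\!\left(1 - D(G(z))\right)\right]
```

Here p_data is the true distribution you're trying to mimic, p_z is a fixed noise distribution you can sample from, and D(x) is the critic's estimated probability that x is real.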

to set up a GAN you need:

GANs use unsupervised learning, so you don't need a labeled data set!

You do need two adversarial neural nets, the actor and the critic (elsewhere you'll see them called the generator and the discriminator). The actor tries to mimic the true data, and the critic tries to tell the difference between the real sample points and the actor's synthetic data. Each learns in turn: first, the actor learns how to outsmart the critic (so the critic cannot differentiate between real and synthetic data), then the critic learns how to catch the actor (i.e., it learns to tell the difference between real and synthetic data), then the actor learns some more so that it can once again outsmart the critic, and on and on.
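This isn't Larry's setup (more on that below), just a minimal sketch of what that alternating loop looks like in code. Everything here -- network sizes, learning rates, the toy "real" data (a shifted Gaussian blob) -- is made up for illustration, and the actor's loss uses the common non-saturating trick (label your fakes as real) rather than the literal minimax loss:

```python
# Toy GAN training loop in PyTorch -- illustrative sketch only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 128

# "Actor" (generator): maps random noise to synthetic data points.
actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# "Critic" (discriminator): maps a data point to P(point is real).
critic = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_actor = torch.optim.Adam(actor.parameters(), lr=2e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real samples (here: a 2-D Gaussian blob).
    real = torch.randn(batch, data_dim) * 0.5 + 3.0

    # Critic's turn: learn to tell real from synthetic.
    fake = actor(torch.randn(batch, latent_dim)).detach()  # don't train the actor here
    loss_critic = (bce(critic(real), torch.ones(batch, 1))
                   + bce(critic(fake), torch.zeros(batch, 1)))
    opt_critic.zero_grad()
    loss_critic.backward()
    opt_critic.step()

    # Actor's turn: learn to fool the critic into calling its output real.
    loss_actor = bce(critic(actor(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_actor.zero_grad()
    loss_actor.backward()
    opt_actor.step()
```

Note that both nets just start from random initial weights and take turns from there, which is exactly the chicken-and-egg worry in the next paragraph.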

The one thing I don't quite understand is where this all starts. Once the actor and the critic are mostly on track, it seems like it won't be hard for them to keep going, but each neural net needs the other to measure its success. So which do you train first: the chicken or the egg? And how do you do that training? Or can you really just throw in some random parameter values and expect the system to converge to what you want?

thoughts on GANs 

This whole setup just begs for some convergence theorems, doesn't it? And apparently GANs are really finicky to train... which implies that people aren't using good* convergence theorems... which could imply that good convergence theorems don't exist, but could also just imply that good convergence theorems do exist and people aren't using them... Oh, or it could imply that convergence is fine but what you end up with is just not the result you wanted. For example, maybe you need to choose a better set of features as inputs to the neural nets. (The original GAN paper does prove something convergence-flavored; see the equations after the footnotes.)

*What's a "good" (set of) convergence theorem(s)? Well, a theorem should actually work out in practice (shouldn't stability-like theorems always work in practice? that's the whole point!). That means training should finish in the prescribed time, which should be finite and reasonable. That also means the theorem should apply to real applications. A GAN maybe doesn't have to converge for every possible distribution-from-which-input-is-sampled, but if there's a significant** chunk of distributions the theorem doesn't cover, then we should at least be able to check whether any given distribution falls in that chunk. And then, of course, for a theorem to be "good," we need to know which initial conditions for the parameters lead to convergence.

**either significant in size or significant in terms of applications.
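Coming back to convergence: for what it's worth, the original paper does prove a result of this flavor, though about the idealized game (each player fully optimizing over all possible functions) rather than about the actual alternating gradient updates on parameterized nets. For a fixed generator G, the optimal critic is

```latex
D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},
```

and plugging that back into the value function shows the actor's objective is, up to constants, the Jensen-Shannon divergence between the real and synthetic distributions:

```latex
\max_D V(D, G) = -\log 4 + 2 \cdot \mathrm{JSD}\!\left(p_{\mathrm{data}} \,\middle\|\, p_g\right),
```

which is minimized exactly when p_g = p_data. So a convergence theorem exists; whether a "good" one in the sense above exists for the actual gradient-based training of actual networks is a different matter.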

questions about norms that aren't normal to me

The topic of convergence theorems for GANs is pretty interesting to me, but at the same time as I was learning about GANs, I also picked up another interesting tidbit:

I heard about all this at a talk about GANs by Larry Carin (whom perhaps I should cite for all this information? Larry Carin, academic keynote talk, Triangle Machine Learning Day at Duke, April 3, 2018). An audience member from industry was immediately interested in Larry's new method of setting up GANs and wanted to know whether papers or code were posted anywhere, and Larry just said "it's under review" and nothing else. Well, he said "it's under review" twice: the second time when the person from the audience pushed him on it.

So, does he not put preprints on arXiv? If he doesn't, why not? And furthermore, why didn't he explain why the paper isn't publicly available? Is he worried about being scooped? (It's already submitted!) Is he worried about copyright? (Ew. Journals and conferences that don't let you offer free preprints are the worst, but usually an author will still email a copy to a person who asks.) Is he worried the reviews are going to come back indicating major errors? (Then why is he talking about the project?) Machine learning research moves really fast, so shouldn't he want it out there? Oooooh, maybe he has a student who's working on a problem based on this work, and he doesn't want his student to get scooped. So he's giving the student as much of a head start as he can.

To this last guess, I have:
My 1st reaction: "That's sweet of him."
2nd: "Wait, no, *tries to think of a way this impacts disadvantaged students* ... hmmmm"
3rd: "I guess it's bad for the field?"
4th: "It must be hard for students in this field. Getting scooped is not fun, especially not for a dissertation. Maybe this is an acceptable protection for someone entering the field."
5th: "What if he's protecting someone who isn't just entering the field? He could be doing it so some already-established academic can get a leg up. Is that acceptable?"

Instead of falling down this rabbit hole, I'll conclude: why is he talking about this work now if he has written a manuscript but won't make it available? Well, I guess maybe it is available in the sense that if I email him he might send me a copy. But still, why not post it on arXiv? And whatever the reason is, why won't he explicitly say it's not publicly available yet? There are a lot of norms in the machine learning community that I don't know about. Apparently one of them is that "it's under review" is code for something -- something slightly uncomfortable and thus not to be talked about in mixed company -- and I do not know what that something is.

