Science AMAs are posted early to give readers a chance to ask questions and vote on the questions of others before the AMA starts.
Guests of /r/science have volunteered to answer questions; please treat them with due respect. Comment rules will be strictly enforced, and uncivil or rude behavior will result in a loss of privileges in /r/science.
If you have scientific expertise, please verify this with our moderators by getting your account flaired with the appropriate title. Instructions for obtaining flair are here: [reddit Science Flair Instructions](http://www.reddit.com/r/science/wiki/flair) (Flair is automatically synced with /r/EverythingScience as well.)
EH: Thanks everyone. I'd love to answer more of these great questions. I really appreciate people taking the time to engage.
Cortana is telling me with an alert on my laptop (complete with an explanatory map--per that question on AI and explanation that I wish I had time to get to :-)) that I have to leave now for the airport to make it back to Seattle tonight. I know that Cortana is calling our predictive models (built via the Clearflow project), so I trust the inferences! Would love to catch up with folks in other forums, or perhaps in person.
PN: Thanks everyone for all the questions. I'm sorry we couldn't get to all of them. Signing off now.
A lot of research in ML now seems to have shifted towards Deep Learning.
1. Do you think that this has any negative effects on the diversity of research in ML?
2. Should research in other paradigms such as Probabilistic Graphical Models, SVMs, etc. be abandoned completely in favor of Deep Learning? Perhaps models like these, which do not perform so well right now, may perform well in the future, just as deep learning languished in the 1990s before its resurgence.
Which careers do you see being replaced by AI and which seem safe for the next generation?
I ask this as a high school teacher who often advises students on their career choices.
So many people talk about the disruption of jobs that are primarily based on driving a vehicle, to the exclusion of other fields. I have a student right now who plans to become a pilot. I told him to look into pilotless planes, and he concluded that they aren't a threat.
I have told students that going into the trades is a safe bet, especially trades that require a lot of mobility. What other fields seem safe for now?
How do you intend to break out of task-specific AI into more general intelligence? We now seem to be putting a lot of effort into winning at Go or using deep learning for specific scientific tasks. That's fantastic, but it's a narrower idea of AI than most people have. How do we get from there to a sort of AI Socrates who can expound on whatever topic it sees fit? You can't build general intelligence just by putting together a million specific ones.
Hi there! Thanks for doing this AMA!

I am a Nuclear Engineering/Plasma Physics graduate pursuing a career shift into the field of AI research.
Regarding the field of AI:
* What are the next milestones in AI research that you anticipate or are most excited about?
* What are the current challenges in reaching them?
Regarding professional development in the field:
* What are some crucial skills or areas of knowledge I should possess in order to succeed in this field?
* Do you have any general advice/ recommended resources for people getting started?
*Edit:* I have been utilizing free online courses from Coursera, edX, and Udacity on CS, programming, algorithms, and ML to get started. I plan to practice my skills on OpenAI Gym, and by creating other personal projects once I have a stronger grasp of the fundamental knowledge. I'm also open to any suggestions from anyone else! Thanks!
I am a PhD student who does not really have the funds to invest in multiple GPUs and gigantic (in terms of compute power) deep learning rigs. As a student, I am constantly under pressure to publish (my field is Computer Vision/ML), and I know for a fact that I cannot test all hyperparameters of my 'new on the block' network fast enough to get me a paper by a deadline.
Meanwhile, folks working in research at corporations like Facebook, Google, etc. have significantly more resources at their disposal to quickly try out ideas and get great results and papers.

At conferences, we are all judged the same -- so I don't stand a chance. If the only way I can run enough experiments in time to publish is to intern at big companies, don't you think that is a huge problem? I am based in the USA. What about other countries?
Do you have any thoughts on how to address this issue?
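For context, the kind of budget-limited tuning I mean can be sketched with plain random search: draw a fixed, small number of hyperparameter settings, evaluate each, and keep the best. Everything here is illustrative (the `evaluate` function is a toy stand-in for an expensive training run, and the search space is made up), but the pattern is what a single-GPU student is stuck doing instead of an exhaustive sweep:

```python
import random

def evaluate(params):
    """Toy stand-in for an expensive training run.

    Returns a fake 'validation score' that peaks near lr=0.01 and
    width=64; in practice this would train and validate the model.
    """
    return -abs(params["lr"] - 0.01) - abs(params["width"] - 64) / 100.0

# Hypothetical search space: each entry is a sampler for one hyperparameter.
search_space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),        # log-uniform learning rate
    "width": lambda: random.choice([32, 64, 128, 256]),  # hidden-layer width
}

random.seed(0)
best_params, best_score = None, float("-inf")
for _ in range(20):  # small fixed trial budget -- the whole constraint
    params = {name: draw() for name, draw in search_space.items()}
    score = evaluate(params)
    if score > best_score:
        best_params, best_score = params, score

print("best:", best_params, "score:", best_score)
```

With 20 trials this is cheap, but each trial is still a full training run, which is exactly where a lab with a thousand GPUs can run a thousand trials in the time I run one.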
A lot of people worry about what they search for and say into Siri, Google Home, etc. and how that may affect privacy.
Microsoft and Facebook have had their challenges with hacking, data theft, and other breaches and influence operations. Facebook's experiment with showing users negative posts to see how it affected their moods, and Russian election influence, are two big, morally debatable events that have affected people.
As AI becomes more ingrained in our everyday lives, what protections might there be for consumers who wish to remain unidentified or unlinked to searches but still want to use new technology?
Many times, devices and services will explicitly say that use of the device or service means that things transmitted or stored are owned by the company (Facebook has done/does this). Terms go further to say that if a customer does not agree, they should stop using the device or service. Must it be all or nothing? Can't there be a happy medium?
* "Facebook Tinkers With Users’ Emotions in News Feed Experiment, Stirring Outcry" (New York Times, Vindu Goel, June 29, 2014)
* "Secret Microsoft database of unfixed vulnerabilities hacked in 2013" (CBC News, Thomson Reuters, Oct 17, 2017) http://www.cbc.ca/news/technology/microsoft-hack-1.4358025
* "Siri is always listening. Are you OK with that?" (Business Insider, Lisa Eadicicco, Sept 9, 2015) http://www.businessinsider.com/siri-new-always-on-feature-has-privacy-implications-2015-9
* "Google admits its new smart speaker was eavesdropping on users" (CNNTech, Samuel Burke, October 12, 2017)
Would your companies keep some algorithms/architectures secret for competitive advantage? I know that data sets are huge competitive advantages, but are algorithms too?
In other words, if your respective companies come across a breakthrough algorithm/architecture like the next CNN or the next LSTM, would you rather publish it for scientific progress' sake or keep it as a secret for competitive advantage?
As an ML practitioner myself, I am increasingly getting fed up with the various "fake AI" being thrown around these days. Some examples:

* A puppet with preprogrammed answers that gets presented as a living, conscious being.
* 95% of job openings mentioning machine learning are for non-AI positions, and just tack on "AI" or "machine learning" as a buzzword to make the company seem more attractive.
It seems to me like there is a small core of a few thousand people in this world doing anything serious with machine learning, while there is a 100x larger group of bullshitters doing "pretend AI". This is a disease that hurts everyone, and it takes away from the incredible things that are actually being done in ML these days. What can be done to stop this bullshit?
Hi there! Sorry for being that person, but... how would you comment on the ethics of collecting user data to train your AIs, thereby giving you a huge advantage over all other potential groups?
Also, how is your research controlled? I work in medical imaging, and we have some sub-groups working in AI-related fields (typically deep learning). The thing is that to run an analysis on a set of a few images *you already have*, it is imperative to request authorization from an IRB and pay them exorbitant fees, because "everything involving humans in academia must be stamped by an IRB." How does it work when a private company does that? Do they have to pay similar fees to an IRB and request authorization? Or can you just do whatever you want?
What is an example of AI working behind the scenes that most of us are unaware of?