Coursera Machine Learning Study Group

• Oct 10, 2012 · 7:15 PM
• This location is shown only to members

Stanford Professor Andrew Ng is teaching his famous online Machine Learning course once again starting August 20th via coursera.org. Let's get together once a week to discuss the course material and related machine learning concepts and applications.

We will occasionally have special guests give a short technical presentation on how they use machine learning techniques to solve real problems.

Current students, past students, and anyone with a hands-on interest in ML are welcome!

This week's topic:

7:15 to 8pm: Discussion of lecture material and homework.

8 to 9pm: Praveen has kindly agreed to share with us his experience with his kaggle competition project!

I believe he worked on the Best Buy project, http://www.kaggle.com/c/acm-sf-chapter-hackathon-big, in case you guys want to check it out beforehand.

• amy w.

I have uploaded the spreadsheet and graphs we discussed today about the impact of regularization to the file section. These are based on ex5 from the class. Let me know if you have questions. Amy

October 10, 2012
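For anyone who missed the discussion: ex5 covers regularized linear regression, where the cost adds a penalty on the weights. A minimal sketch of that cost function in plain Python (the toy data here is made up for illustration, not from the class materials):

```python
# Regularized linear regression cost, as in ex5:
#   J(theta) = (1/(2m)) * sum((h(x) - y)^2) + (lambda/(2m)) * sum(theta[1:]^2)
# (theta[0], the bias term, is conventionally not regularized).

def cost(theta, X, y, lam):
    m = len(y)
    # residuals: predicted value (dot product of theta and x) minus target
    residuals = [sum(t * xi for t, xi in zip(theta, x)) - yi
                 for x, yi in zip(X, y)]
    fit = sum(r * r for r in residuals) / (2 * m)
    penalty = lam * sum(t * t for t in theta[1:]) / (2 * m)
    return fit + penalty

# Toy data: y = 2x, with a column of ones for the intercept
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [2.0, 4.0, 6.0]
theta = [0.0, 2.0]  # fits the data perfectly

print(cost(theta, X, y, 0.0))   # 0.0 -- perfect fit, no penalty
print(cost(theta, X, y, 10.0))  # > 0 -- larger lambda penalizes the same weights
```

The point the graphs make is visible here: increasing lambda raises the cost of large weights, trading a worse fit on the training data for a simpler hypothesis.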

• Bill

I will be there late - but I will be there.

October 10, 2012

• Declan H.

Is the meetup wed or Thursday? The app and website calendars are different.

October 9, 2012

• amy w.

wed

October 10, 2012

• amy w.

"why sigmoid?: A feedforward network is basically a device for approximating continuous functions. One way these functions can be approximated is by summing exponentials and logarithms... Now look at a plot of the sigmoid function. You can split that plot into 3 parts along the x-axis: the 1st ranges from -infinity to -0.5, the 2nd from -0.5 to 0.5, and the 3rd from 0.5 to positive infinity. Looking at these segments, you'll notice that the 1st looks roughly like an exponential, the 2nd is more or less linear, and the 3rd resembles a logarithm. Thus, what the neural network basically does in the hidden layer is extract exponential, linear, and logarithmic functions of the input that can be summed together to yield the desired overall function. The summing of these functions is done by the nodes in the output layer. So the benefit of the sigmoid is basically that it allows us to tune the weights of a node to produce an output that looks like an exponential or a logarithm."

October 8, 2012
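The intuition in that quote can be checked numerically with a short Python sketch (this is just an illustration, not from the course):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

# Left tail: for very negative x, 1 / (1 + e^-x) ≈ e^x,
# so the sigmoid behaves like an exponential.
print(sigmoid(-6), math.exp(-6))    # both ≈ 0.0025

# Middle: roughly linear, sigmoid(x) ≈ 0.5 + x/4
# (the tangent at x = 0 has slope sigmoid'(0) = 1/4).
print(sigmoid(0.1), 0.5 + 0.1 / 4)  # ≈ 0.525 vs 0.525

# Right tail: sigmoid(x) ≈ 1 - e^-x, a saturating, log-like rise toward 1.
print(sigmoid(6), 1 - math.exp(-6))  # both ≈ 0.9975
```

So a hidden unit with a large positive input weight pushes its pre-activation into one of the tails, while a small weight keeps it in the near-linear middle region, which is exactly the "tuning" the quote describes.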

• amy w.

I have updated the agenda: Praveen has kindly agreed to share with us his experience with his Kaggle competition project!

I believe he worked on the Best Buy project, http://www.kaggle.com/c/acm-sf-chapter-hackathon-big, in case you guys want to check it out beforehand.

October 8, 2012

30 went

• Walter V.
• A former member

Mountain View, CA

Founded Jul 31, 2012

Organizers:

• Cogniac

Observe and Optimize
