- <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
- <html> <head>
- <title>Write your own Bayesian Classifier!</title>
- <style>
- .slide {
- border: 2px solid #888833;
- background-color: #AABBAA;
- padding: 2%;
- width: 94%;
- }
- pre {
- border: 1px solid #444422;
- background-color: #BBCCBB;
- padding: 2px;
- }
- </style>
- <script src="scripts/jquery-1.3.1.min.js" type="text/javascript"></script>
- <script src="scripts/slideshow.js" type="text/javascript"></script>
- </head>
- <body>
- <div class='slide'>
- <h1>Write your own Bayesian Classifier!</h1>
- <p>John Melesky
- (Open Source Bridge, June 2009)</p>
- </div>
- <div class='slide'>
- <h1>What's a Bayesian Classifier?</h1>
- </div>
- <div class='slide'>
- <h1>What's a Bayesian Classifier?</h1>
- <p>Something which classifies based on:</p>
- <ol>
- <li>Information about past categorizations</li>
- <li>Bayesian statistics (Bayes' Theorem)</li>
- </ol>
- </div>
- <div class='slide'>
- <h1>What's Bayes' Theorem?</h1>
- <p>Let's check <a href="http://phaedrusdeinus.org/Bayes'_theorem.html">Wikipedia</a>.</p>
- </div>
- <div class='slide'>
- <h1>Derrr....</h1>
- </div>
- <div class='slide'>
- <h1>An example: random drug testing</h1>
- <p>3% of the population are using Zopadrine.</p>
- <p>We have a drug test with a 98% accuracy rate.</p>
- </div>
- <div class='slide'>
- <h1>An example: random drug testing</h1>
- <p>3% of the population are using Zopadrine.</p>
- <p>We have a drug test with a 98% accuracy rate.</p>
- <p>Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?</p>
- </div>
- <div class='slide'>
- <h1>Break it down</h1>
- <p>Let's assume a population of 10000 people.</p>
- </div>
- <div class='slide'>
- <h1>Break it down</h1>
- <p>3% are users.</p>
- <table border=1>
- <tr><td></td><td>Population</td></tr>
- <tr><td>Clean</td><td>9700</td></tr>
- <tr><td>Users</td><td>300</td></tr>
- <tr><td>Total</td><td>10000</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Break it down</h1>
- <p>The test is 98% accurate.</p>
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
- <tr><td>Total</td><td>10000</td><td>9512</td><td>488</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Break it down</h1>
- <p>Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?</p>
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td bgcolor="#ff6666">294</td></tr>
- <tr><td>Total</td><td>10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Break it down</h1>
- <p>294 / 488 = 60.24%</p>
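- <p>The same arithmetic, as a quick sanity check in perl (assuming the 10,000-person population above):</p>
- <pre><code># 3% prevalence, 98% accurate test
- my $users     = 10000 * 0.03;       # 300
- my $clean     = 10000 - $users;     # 9700
- my $true_pos  = $users * 0.98;      # 294 users test positive
- my $false_pos = $clean * 0.02;      # 194 clean people test positive
- print $true_pos / ($true_pos + $false_pos), "\n";   # 0.6024...
- </code></pre>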
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <p><img alt="Bayes' Theorem" src="img/bayes.png" /></p>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P = probability</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>A = "is a user"</td></tr>
- <tr><td>B = "tests positive"</td></tr>
- <tr><td>x|y = x, given y</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P(A) = probability of being a user</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to the numbers</h1>
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
- <tr><td>Total</td><td bgcolor="#ff6666">10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = 4.88%</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>Back to Bayes' Theorem</h1>
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = 4.88%</td></tr>
- <tr><td>P(A|B) = (98% * 3%)/4.88% = 60.24%</td></tr>
- </table>
- </div>
- <div class='slide'>
- <h1>This works with population numbers, too</h1>
- <pre><code>P(A) = 300
- P(B|A) = 9800
- P(B) = 488
- P(A|B) = 6024
- </code></pre>
- <p>(These are all counts per 10,000 people, so 6024 corresponds to the 60.24% we computed.) Which is useful for reasons we'll see later.</p>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>My examples are going to be in perl.</p>
- <pre><code>sub bayes {
-     my ($p_a, $p_b, $p_b_a) = @_;
-     my $p_a_b = ($p_b_a * $p_a) / $p_b;
-     return $p_a_b;
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>But you could just as easily work in Python.</p>
- <pre><code>def bayes(p_a, p_b, p_b_a):
-     return (p_b_a * p_a) / p_b
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>Or Java</p>
- <pre><code>public static Double bayes(Double p_a, Double p_b, Double p_b_a) {
-     Double p_a_b = (p_b_a * p_a) / p_b;
-     return p_a_b;
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>Or SML</p>
- <pre><code>fun bayes (p_a, p_b, p_b_a) = (p_b_a * p_a) / p_b;
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>Or Erlang</p>
- <pre><code>bayes(P_A, P_B, P_B_A) ->
-     (P_B_A * P_A) / P_B.
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>Or Haskell</p>
- <pre><code>bayes p_a p_b p_b_a = (p_b_a * p_a) / p_b
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>Or Scheme</p>
- <pre><code>(define (bayes p_a p_b p_b_a)
-   (/ (* p_b_a p_a) p_b))
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Bayes' Theorem, in code</h1>
- <p>LOLCODE, anyone? Befunge? Unlambda?</p>
- <p>If it supports floating point operations, you're set.</p>
- </div>
- <div class='slide'>
- <h1>How does that make a classifier?</h1>
- <pre><code>A = "is spam"
- B = "contains the string 'viagra'"
- </code></pre>
- <p>What's P(A|B)?</p>
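- <p>With the bayes() sub from a few slides back and some made-up corpus counts, that question turns into code (the numbers here are purely illustrative):</p>
- <pre><code>my $total_docs        = 1000;
- my $spam_docs         = 400;   # docs labelled spam
- my $viagra_docs       = 120;   # docs containing 'viagra'
- my $spam_viagra_docs  = 115;   # spam docs containing 'viagra'
- my $p_a   = $spam_docs / $total_docs;          # P(is spam)
- my $p_b   = $viagra_docs / $total_docs;        # P(contains 'viagra')
- my $p_b_a = $spam_viagra_docs / $spam_docs;    # P(contains 'viagra' | is spam)
- print bayes($p_a, $p_b, $p_b_a), "\n";         # ~0.958
- </code></pre>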
- </div>
- <div class='slide'>
- <h1>What do we need for a classifier?</h1>
- <ol>
- <li>We need to tokenize our training set</li>
- <li>Then build a model</li>
- <li>Then test that model</li>
- <li>Then apply that model to new data</li>
- </ol>
- </div>
- <div class='slide'>
- <h1>What do we need for a classifier?</h1>
- <ol>
- <li><strong>We need to tokenize our training set</strong></li>
- <li>Then build a model</li>
- <li>Then test that model</li>
- <li>Then apply that model to new data</li>
- </ol>
- </div>
- <div class='slide'>
- <h1>Tokenizing your training set</h1>
- <p><em>Fancy</em> perl</p>
- <pre><code>sub tokenize {
-     my $contents = shift;
-     my %tokens = map { $_ => 1 } split(/\s+/, $contents);
-     return %tokens;
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Tokenizing your training set</h1>
- <pre><code>sub tokenize_file {
-     my $filename = shift;
-     my $contents = '';
-     open(FILE, $filename) or die "can't open $filename: $!";
-     read(FILE, $contents, -s FILE);   # slurp the whole file
-     close(FILE);
-     return tokenize($contents);
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Tokenizing your training set</h1>
- <p>This is the "bag of words" model.</p>
- <p>For each category (spam, not spam), we need to know how many documents in the training set contain a given word.</p>
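- <p>The per-category hashes we're about to build might end up looking something like this (made-up counts):</p>
- <pre><code>%spam_tokens = (
-     'viagra'  => 212,
-     'free'    => 187,
-     'meeting' => 3,
- );
- %notspam_tokens = (
-     'viagra'  => 1,
-     'free'    => 22,
-     'meeting' => 145,
- );
- </code></pre>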
- </div>
- <div class='slide'>
- <h1>Tokenizing your training set</h1>
- <pre><code>my %spam_tokens = ();
- my %notspam_tokens = ();
- foreach my $file (@spam_files) {
-     my %tokens = tokenize_file($file);
-     %spam_tokens = combine_hash(\%spam_tokens, \%tokens);
- }
- foreach my $file (@notspam_files) {
-     my %tokens = tokenize_file($file);
-     %notspam_tokens = combine_hash(\%notspam_tokens, \%tokens);
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Tokenizing your training set</h1>
- <pre><code>sub combine_hash {
-     my ($hash1, $hash2) = @_;
-     my %resulthash = %{ $hash1 };
-     foreach my $key (keys(%{ $hash2 })) {
-         if ($resulthash{$key}) {
-             $resulthash{$key} += $hash2->{$key};
-         } else {
-             $resulthash{$key} = $hash2->{$key};
-         }
-     }
-     return %resulthash;
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>What do we need for a classifier?</h1>
- <ol>
- <li>We need to tokenize our training set</li>
- <li><strong>Then build a model</strong></li>
- <li>Then test that model</li>
- <li>Then apply that model to new data</li>
- </ol>
- </div>
- <div class='slide'>
- <h1>Build a model</h1>
- <pre><code>my %total_tokens = combine_hash(\%spam_tokens, \%notspam_tokens);
- my $total_spam_files = scalar(@spam_files);
- my $total_notspam_files = scalar(@notspam_files);
- my $total_files = $total_spam_files + $total_notspam_files;
- my $probability_spam = $total_spam_files / $total_files;
- my $probability_notspam = $total_notspam_files / $total_files;
- </code></pre>
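- <p>One way to gather all of those numbers into a single structure, if that helps you see the model as a thing (a sketch, nothing canonical about the layout):</p>
- <pre><code>my %model = (
-     p_spam         => $probability_spam,
-     p_notspam      => $probability_notspam,
-     spam_files     => $total_spam_files,
-     notspam_files  => $total_notspam_files,
-     spam_tokens    => \%spam_tokens,
-     notspam_tokens => \%notspam_tokens,
- );
- </code></pre>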
- </div>
- <div class='slide'>
- <h1>Build a model</h1>
- <p>In this case, our model is just a bunch of numbers. </p>
- </div>
- <div class='slide'>
- <h1>Build a model</h1>
- <p>In this case, our model is just a bunch of numbers. </p>
- <p>(a little secret: it's <em>all</em> a bunch of numbers)</p>
- </div>
- <div class='slide'>
- <h1>What do we need for a classifier?</h1>
- <ol>
- <li>We need to tokenize our training set</li>
- <li>Then build a model</li>
- <li><strong>Then test that model</strong></li>
- <li>Then apply that model to new data</li>
- </ol>
- </div>
- <div class='slide'>
- <h1>*cough* *cough*</h1>
- </div>
- <div class='slide'>
- <h1>What do we need for a classifier?</h1>
- <ol>
- <li>We need to tokenize our training set</li>
- <li>Then build a model</li>
- <li>Then test that model</li>
- <li><strong>Then apply that model to new data</strong></li>
- </ol>
- </div>
- <div class='slide'>
- <h1>Apply that model to new data</h1>
- <pre><code>my %test_tokens = tokenize_file($test_file);
- my $total_tokens = scalar(keys(%total_tokens));   # distinct tokens in the model
- my $spam_accumulator    = 1;
- my $notspam_accumulator = 1;
- foreach my $token (keys(%test_tokens)) {
-     if (exists($total_tokens{$token})) {
-         my $p_t_s = (($spam_tokens{$token} || 0) + 1) /
-                     ($total_spam_files + $total_tokens);
-         $spam_accumulator = $spam_accumulator * $p_t_s;
-         my $p_t_ns = (($notspam_tokens{$token} || 0) + 1) /
-                      ($total_notspam_files + $total_tokens);
-         $notspam_accumulator = $notspam_accumulator * $p_t_ns;
-     }
- }
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Apply that model to new data</h1>
- <pre><code># $total_tokens stands in for P(B); it's the same for both scores,
- # so it cancels out when we normalize below
- my $score_spam    = bayes( $probability_spam,
-                            $total_tokens,
-                            $spam_accumulator );
- my $score_notspam = bayes( $probability_notspam,
-                            $total_tokens,
-                            $notspam_accumulator );
- my $likelihood_spam    = $score_spam    / ($score_spam + $score_notspam);
- my $likelihood_notspam = $score_notspam / ($score_spam + $score_notspam);
- printf("likelihood of spam email: %0.2f %%\n", ($likelihood_spam * 100));
- </code></pre>
- </div>
- <div class='slide'>
- <h1>Boom</h1>
- </div>
- <div class='slide'>
- <h1>What sucks?</h1>
- </div>
- <div class='slide'>
- <h1>What sucks?</h1>
- <ul>
- <li>Our tokenization</li>
- </ul>
- </div>
- <div class='slide'>
- <h1>What sucks?</h1>
- <ul>
- <li>Our tokenization</li>
- <li>Our memory limitations</li>
- </ul>
- </div>
- <div class='slide'>
- <h1>What sucks?</h1>
- <ul>
- <li>Our tokenization</li>
- <li>Our memory limitations</li>
- <li>Saving/loading models</li>
- </ul>
- </div>
- <div class='slide'>
- <h1>Improve memory use</h1>
- <h3>Limit the number of tokens</h3>
- <p>We want to use the tokens with the highest information values. That means tokens that are predominantly in one category but not the other.</p>
- </div>
- <div class='slide'>
- <h1>Improve memory use</h1>
- <h3>Limit the number of tokens</h3>
- <p>We want to use the tokens with the highest information values. That means tokens that are predominantly in one category but not the other.</p>
- <p>There are a bunch of ways to calculate this, though the big one is Information Gain.</p>
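- <p>One rough way to compute it per token, from the document counts we already have (a sketch, not the only way to do it):</p>
- <pre><code># entropy of a set of class counts, in bits
- sub entropy {
-     my @counts = grep { $_ > 0 } @_;
-     my $total = 0;
-     $total += $_ for @counts;
-     my $h = 0;
-     foreach my $count (@counts) {
-         my $p = $count / $total;
-         $h -= $p * log($p) / log(2);
-     }
-     return $h;
- }
- # information gain of knowing whether a document contains $token
- sub information_gain {
-     my $token = shift;
-     my $s_with     = $spam_tokens{$token}    || 0;
-     my $ns_with    = $notspam_tokens{$token} || 0;
-     my $s_without  = $total_spam_files    - $s_with;
-     my $ns_without = $total_notspam_files - $ns_with;
-     my $with    = $s_with + $ns_with;
-     my $without = $s_without + $ns_without;
-     my $total   = $with + $without;
-     return entropy($total_spam_files, $total_notspam_files)
-          - ($with    / $total) * entropy($s_with, $ns_with)
-          - ($without / $total) * entropy($s_without, $ns_without);
- }
- </code></pre>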
- </div>
- <div class='slide'>
- <h1>Improve tokenization, simple stuff</h1>
- <ul>
- <li>Weed out punctuation</li>
- <li>Weed out stopwords</li>
- <li>normaLize CASE</li>
- <li>Strip out markup</li>
- </ul>
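- <p>A cleaned-up tokenizer along those lines might look something like this (the stopword list is a stand-in; markup stripping is assumed to happen upstream):</p>
- <pre><code>sub tokenize_clean {
-     my $contents = lc(shift);               # normaLize CASE
-     $contents =~ s/[[:punct:]]+/ /g;        # weed out punctuation
-     my %stopwords = map { $_ => 1 } qw(a an and the of to in is it for);
-     my %tokens = map  { $_ => 1 }
-                  grep { length($_) and !$stopwords{$_} }
-                  split(/\s+/, $contents);
-     return %tokens;
- }
- </code></pre>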
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Stemming</h3>
- <p>"wrestling", "wrestler", "wrestled", and "wrestle" are all the same word concept.</p>
- <p>Pros: fewer tokens, related tokens match</p>
- <p>Cons: some words are hard to stem correctly (e.g. "cactus")</p>
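- <p>A toy stemmer, just to show the idea (real code would use a proper Porter-style stemmer rather than this crude suffix-chopping):</p>
- <pre><code># crude illustration only: chop a few common suffixes
- sub stem {
-     my $word = lc(shift);
-     $word =~ s/(?:ing|ers?|ed|e|s)$// if length($word) > 4;
-     return $word;
- }
- # "wrestling", "wrestler", "wrestled", "wrestle" all become "wrestl"
- # (and "cactus" wrongly becomes "cactu", hence the cons above)
- </code></pre>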
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Include bigrams</h3>
- <p>Bigrams are token pairs. For example, "open source", "ron paul", "twitter addict".</p>
- <p>Pros: we start distinguishing between Star Wars and astronomy wars</p>
- <p>Cons: our memory use balloons</p>
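- <p>Bolting bigrams onto the earlier tokenizer is only a few lines (a sketch):</p>
- <pre><code>sub tokenize_with_bigrams {
-     my $contents = shift;
-     my @words  = split(/\s+/, $contents);
-     my %tokens = map { $_ => 1 } @words;
-     foreach my $i (0 .. $#words - 1) {
-         $tokens{"$words[$i] $words[$i+1]"} = 1;   # e.g. "open source"
-     }
-     return %tokens;
- }
- </code></pre>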
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Use numbers</h3>
- <p>Instead of binary (word x is in doc y), we store frequencies (word x appears z times in doc y).</p>
- <p>Pros: damage from weak associations is reduced; easier to find the important words in a document</p>
- <p>Cons: the math becomes more complex; in many cases, accuracy doesn't actually increase</p>
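- <p>The binary tokenizer becomes a frequency counter with a one-line change (a sketch):</p>
- <pre><code>sub tokenize_counts {
-     my $contents = shift;
-     my %tokens;
-     $tokens{$_}++ for split(/\s+/, $contents);   # word x appears z times
-     return %tokens;
- }
- </code></pre>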
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Use non-token features</h3>
- <p>Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.</p>
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Use non-token features</h3>
- <p>Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.</p>
- <p>We can also grab structural information, like the sender, or subject line, and treat them differently. Or whether the word appears early or late in the document.</p>
- </div>
- <div class='slide'>
- <h1>Improve tokenization, advanced stuff</h1>
- <h3>Use non-token features</h3>
- <p>Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.</p>
- <p>We can also grab structural information, like the sender, or subject line, and treat them differently. Or whether the word appears early or late in the document.</p>
- <p>Pros: a little can go a long way</p>
- <p>Cons: selecting these can be a dark art, or an incredible memory burden.</p>
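- <p>As a sketch, here are a couple of non-token features folded in as pseudo-tokens (the feature names and thresholds are invented):</p>
- <pre><code>sub extra_features {
-     my $contents = shift;
-     my %features;
-     my $caps  = ($contents =~ tr/A-Z//);          # count of capital letters
-     my $chars = length($contents) || 1;
-     $features{'FEATURE:long_document'} = 1 if $chars > 2000;
-     $features{'FEATURE:mostly_caps'}   = 1 if ($caps / $chars) > 0.3;
-     return %features;                             # merge with the word tokens
- }
- </code></pre>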
- </div>
- <div class='slide'>
- <h1>Which leads us to</h1>
- </div>
- <div class='slide'>
- <h1>Which leads us to</h1>
- <p>Tokenization == Vectorization</p>
- </div>
- <div class='slide'>
- <h1>In other words</h1>
- <p>Our documents are all just vectors of numbers.</p>
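- <p>Concretely: pick a fixed order for the vocabulary, and each document's token hash becomes a vector (a sketch):</p>
- <pre><code>my @vocabulary = sort keys %total_tokens;   # fixed dimension order
- sub vectorize {
-     my %tokens = @_;
-     return map { $tokens{$_} ? 1 : 0 } @vocabulary;
- }
- </code></pre>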
- </div>
- <div class='slide'>
- <h1>Or even</h1>
- <p>Our documents are all just points in a high-dimensional Cartesian space.</p>
- </div>
- <div class='slide'>
- <h1>Vectors of numbers</h1>
- <p>This concept opens up a whole world of statistical methods for categorization, including decision trees, linear separations, and support vector machines.</p>
- </div>
- <div class='slide'>
- <h1>Points in space</h1>
- <p>And this opens up a whole different world of geometric methods for categorization and information manipulation, including k-nearest-neighbor classification and various clustering algorithms.</p>
- </div>
- <div class='slide'>
- <h1>Alright</h1>
- <p>It's been a long trip. Any questions?</p>
- </div>
- <div class='slide'>
- <h1>Thanks</h1>
- <p>Thanks for coming. Thanks to OS Bridge for having me.</p>
- </div>
- </body></html>