- # Write your own Bayesian Classifier!
- John Melesky
- (Open Source Bridge, June 2009)
- ---
- # What's a Bayesian Classifier?
- ---
- # What's a Bayesian Classifier?
- Something which classifies based on:
- 1. Information about past categorizations
- 2. Bayesian statistics (Bayes' Theorem)
- ---
- # What's Bayes' Theorem?
- Let's check [Wikipedia](http://phaedrusdeinus.org/Bayes'_theorem.html).
- ---
- # Derrr....
- ---
- # An example: random drug testing
- 3% of the population are using Zopadrine.
- We have a drug test with a 98% accuracy rate.
- ---
- # An example: random drug testing
- 3% of the population are using Zopadrine.
- We have a drug test with a 98% accuracy rate.
- Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?
- ---
- # Break it down
- Let's assume a population of 10000 people.
- ---
- # Break it down
- 3% are users.
- <table border=1>
- <tr><td></td><td>Population</td></tr>
- <tr><td>Clean</td><td>9700</td></tr>
- <tr><td>Users</td><td>300</td></tr>
- <tr><td>Total</td><td>10000</td></tr>
- </table>
- ---
- # Break it down
- The test is 98% accurate.
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
- <tr><td>Total</td><td>10000</td><td>9512</td><td>488</td></tr>
- </table>
- ---
- # Break it down
- Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td bgcolor="#ff6666">294</td></tr>
- <tr><td>Total</td><td>10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
- </table>
- ---
- # Break it down
- 294 / 488 = 60.24%
- ---
- # Back to Bayes' Theorem
- ![Bayes' Theorem](img/bayes.png)
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P = probability</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>A = "is a user"</td></tr>
- <tr><td>B = "tests positive"</td></tr>
- <tr><td>x|y = x, given y</td></tr>
- </table>
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P(A) = probability of being a user</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = probability of testing positive</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- ---
- # Back to the numbers
- <table border=1>
- <tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
- <tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
- <tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
- <tr><td>Total</td><td bgcolor="#ff6666">10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
- </table>
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = 4.88%</td></tr>
- <tr><td>P(A|B) = probability Bob's a user</td></tr>
- </table>
- ---
- # Back to Bayes' Theorem
- <table>
- <tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
- <tr><td>P(B|A) = 98%</td></tr>
- <tr><td>P(B) = 4.88%</td></tr>
- <tr><td>P(A|B) = (98% * 3%)/4.88% = 60.24%</td></tr>
- </table>
- ---
- # This works with population numbers, too
- Scale everything to counts per 10,000:
- P(A) = 300
- P(B|A) = 9800
- P(B) = 488
- P(A|B) = (9800 * 300) / 488 ≈ 6024, i.e. 60.24%
- The common scale cancels out, which is useful for reasons we'll see later.
- ---
- # Bayes' Theorem, in code
- My examples are going to be in Perl.
- sub bayes {
-     my ($p_a, $p_b, $p_b_a) = @_;
-     my $p_a_b = ($p_b_a * $p_a) / $p_b;
-     return $p_a_b;
- }
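- A quick sanity check with the drug-test numbers from the earlier slides (purely illustrative):
- my $p_user_given_positive = bayes(0.03, 0.0488, 0.98);   # 0.6024..., the same 60.24% as the table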
- ---
- # Bayes' Theorem, in code
- But you could just as easily work in Python.
- def bayes(p_a, p_b, p_b_a):
-     return (p_b_a * p_a) / p_b
- ---
- # Bayes' Theorem, in code
- Or Java
- public static Double bayes(Double p_a, Double p_b, Double p_b_a) {
-     Double p_a_b = (p_b_a * p_a) / p_b;
-     return p_a_b;
- }
- ---
- # Bayes' Theorem, in code
- Or SML
- fun bayes (p_a, p_b, p_b_a) = (p_b_a * p_a) / p_b
- ---
- # Bayes' Theorem, in code
- Or Erlang
- bayes(P_A, P_B, P_B_A) ->
-     (P_B_A * P_A) / P_B.
- ---
- # Bayes' Theorem, in code
- Or Haskell
- bayes p_a p_b p_b_a = (p_b_a * p_a) / p_b
- ---
- # Bayes' Theorem, in code
- Or Scheme
- (define (bayes p_a p_b p_b_a)
-   (/ (* p_b_a p_a) p_b))
- ---
- # Bayes' Theorem, in code
- LOLCODE, anyone? Befunge? Unlambda?
- If it supports floating point operations, you're set.
- ---
- # How does that make a classifier?
- A = "is spam"
- B = "contains the string 'viagra'"
- What's P(A|B)?
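- For a rough sense of the arithmetic, here's the same bayes() call with made-up counts (all of the numbers below are hypothetical):
- my $p_spam           = 300 / 1000;    # hypothetical: 300 of 1,000 training mails are spam
- my $p_viagra         = 120 / 1000;    # hypothetical: 120 of them contain "viagra"
- my $p_viagra_if_spam = 110 / 300;     # hypothetical: 110 of the 300 spam mails contain it
- 
- my $p_spam_if_viagra = bayes($p_spam, $p_viagra, $p_viagra_if_spam);   # about 0.92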
- ---
- # What do we need for a classifier?
- 1. We need to tokenize our training set
- 2. Then build a model
- 3. Then test that model
- 4. Then apply that model to new data
- ---
- # What do we need for a classifier?
- 1. **We need to tokenize our training set**
- 2. Then build a model
- 3. Then test that model
- 4. Then apply that model to new data
- ---
- # Tokenizing your training set
- *Fancy* Perl
- sub tokenize {
-     my $contents = shift;
-     my %tokens = map { $_ => 1 } split(/\s+/, $contents);
-     return %tokens;
- }
- ---
- # Tokenizing your training set
- sub tokenize_file {
-     my $filename = shift;
- 
-     my $contents = '';
-     open(FILE, $filename);
-     read(FILE, $contents, -s FILE);
-     close(FILE);
- 
-     return tokenize($contents);
- }
- ---
- # Tokenizing your training set
- This is the "bag of words" model.
- For each category (spam, not spam), we need to know how many documents in the training set contain a given word.
- ---
- # Tokenizing your training set
- my %spam_tokens = ();
- my %notspam_tokens = ();
- 
- foreach my $file (@spam_files) {
-     my %tokens = tokenize_file($file);
-     %spam_tokens = combine_hash(\%spam_tokens, \%tokens);
- }
- 
- foreach my $file (@notspam_files) {
-     my %tokens = tokenize_file($file);
-     %notspam_tokens = combine_hash(\%notspam_tokens, \%tokens);
- }
- ---
- # Tokenizing your training set
- sub combine_hash {
-     my ($hash1, $hash2) = @_;
- 
-     my %resulthash = %{ $hash1 };
- 
-     foreach my $key (keys(%{ $hash2 })) {
-         if ($resulthash{$key}) {
-             $resulthash{$key} += $hash2->{$key};
-         } else {
-             $resulthash{$key} = $hash2->{$key};
-         }
-     }
- 
-     return %resulthash;
- }
- ---
- # What do we need for a classifier?
- 1. We need to tokenize our training set
- 2. **Then build a model**
- 3. Then test that model
- 4. Then apply that model to new data
- ---
- # Build a model
- my %total_tokens = combine_hash(\%spam_tokens, \%notspam_tokens);
- my $total_tokens = scalar(keys(%total_tokens));   # vocabulary size; the later slides lean on this
- 
- my $total_spam_files = scalar(@spam_files);
- my $total_notspam_files = scalar(@notspam_files);
- my $total_files = $total_spam_files + $total_notspam_files;
- my $probability_spam = $total_spam_files / $total_files;
- my $probability_notspam = $total_notspam_files / $total_files;
- ---
- # Build a model
- In this case, our model is just a bunch of numbers.
- ---
- # Build a model
- In this case, our model is just a bunch of numbers.
- (a little secret: it's *all* a bunch of numbers)
- ---
- # What do we need for a classifier?
- 1. We need to tokenize our training set
- 2. Then build a model
- 3. **Then test that model**
- 4. Then apply that model to new data
- ---
- # \*cough\* \*cough\*
- ---
- # What do we need for a classifier?
- 1. We need to tokenize our training set
- 2. Then build a model
- 3. Then test that model
- 4. **Then apply that model to new data**
- ---
- # Apply that model to new data
- my %test_tokens = tokenize_file($test_file);
- 
- my $spam_accumulator = 1;       # running product of P(token|spam)
- my $notspam_accumulator = 1;    # running product of P(token|not spam)
- 
- foreach my $token (keys(%test_tokens)) {
-     if (exists($total_tokens{$token})) {
-         # add-one smoothing keeps rare tokens from zeroing the product
-         my $p_t_s = (($spam_tokens{$token} || 0) + 1) /
-                     ($total_spam_files + $total_tokens);
-         $spam_accumulator = $spam_accumulator * $p_t_s;
- 
-         my $p_t_ns = (($notspam_tokens{$token} || 0) + 1) /
-                      ($total_notspam_files + $total_tokens);
-         $notspam_accumulator = $notspam_accumulator * $p_t_ns;
-     }
- }
- ---
- # Apply that model to new data
- # P(B) is the same constant for both categories, so it cancels in the
- # normalization below; we just pass $total_tokens as a stand-in
- my $score_spam = bayes( $probability_spam,
-                         $total_tokens,
-                         $spam_accumulator );
- 
- my $score_notspam = bayes( $probability_notspam,
-                            $total_tokens,
-                            $notspam_accumulator );
- 
- my $likelihood_spam = $score_spam / ($score_spam + $score_notspam);
- my $likelihood_notspam = $score_notspam / ($score_spam + $score_notspam);
- 
- printf("likelihood of spam email: %0.2f %%\n", ($likelihood_spam * 100));
- ---
- # Boom
- ---
- # What sucks?
- ---
- # What sucks?
- - Our tokenization
- ---
- # What sucks?
- - Our tokenization
- - Our memory limitations
- ---
- # What sucks?
- - Our tokenization
- - Our memory limitations
- - Saving/loading models
- ---
- # Improve memory use
- ### Limit the number of tokens
- We want to use the tokens with the highest information values. That means tokens that are predominantly in one category but not the other.
- ---
- # Improve memory use
- ### Limit the number of tokens
- We want to use the tokens with the highest information values. That means tokens that are predominantly in one category but not the other.
- There are a bunch of ways to calculate this, though the big one is Information Gain.
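- A sketch of Information Gain over the document counts we already have (it reuses %spam_tokens, %notspam_tokens and the file totals from the earlier slides; keep the N highest-scoring tokens and drop the rest):
- sub entropy {
-     my @counts = @_;
-     my $total = 0;
-     $total += $_ for @counts;
-     my $h = 0;
-     for my $c (@counts) {
-         next unless $c;
-         my $p = $c / $total;
-         $h -= $p * log($p) / log(2);    # Shannon entropy, in bits
-     }
-     return $h;
- }
- 
- sub information_gain {
-     my ($token) = @_;
-     my $spam_with       = $spam_tokens{$token}    || 0;
-     my $notspam_with    = $notspam_tokens{$token} || 0;
-     my $spam_without    = $total_spam_files    - $spam_with;
-     my $notspam_without = $total_notspam_files - $notspam_with;
- 
-     my $with    = $spam_with + $notspam_with;
-     my $without = $spam_without + $notspam_without;
- 
-     # entropy of the categories, minus the expected entropy once we know
-     # whether the token is present
-     return entropy($total_spam_files, $total_notspam_files)
-          - ($with    / $total_files) * entropy($spam_with,    $notspam_with)
-          - ($without / $total_files) * entropy($spam_without, $notspam_without);
- }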
- ---
- # Improve tokenization, simple stuff
- - Weed out punctuation
- - Weed out stopwords
- - normaLize CASE
- - Strip out markup
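- One possible cleaned-up tokenizer along those lines (the stopword list here is just a tiny placeholder):
- my %stopwords = map { $_ => 1 } qw(the a an and or of to in is it);   # placeholder list
- 
- sub tokenize_clean {
-     my $contents = shift;
-     $contents =~ s/<[^>]+>/ /g;          # strip out markup
-     $contents = lc($contents);           # normalize case
-     $contents =~ s/[^a-z0-9\s]/ /g;      # weed out punctuation
-     my %tokens = map  { $_ => 1 }
-                  grep { length($_) && !$stopwords{$_} }    # weed out stopwords
-                  split(/\s+/, $contents);
-     return %tokens;
- }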
- ---
- # Improve tokenization, advanced stuff
- ### Stemming
- "wrestling", "wrestler", "wrestled", and "wrestle" are all the same word concept.
- Pros: fewer tokens, related tokens match
- Cons: some words are hard to stem correctly (e.g. "cactus")
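- A real stemmer (Porter's algorithm is the usual choice) is the way to go; the crude suffix-stripping below is only to show where stemming slots into tokenization:
- sub crude_stem {
-     my $word = shift;
-     $word =~ s/(ing|ed|er|s)$//;    # far too naive for real use
-     return $word;
- }
- 
- # e.g. run each token through crude_stem() before adding it to %tokens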
- ---
- # Improve tokenization, advanced stuff
- ### Include bigrams
- Bigrams are token pairs. For example, "open source", "ron paul", "twitter addict".
- Pros: we can start telling "Star Wars" apart from astronomy documents that merely mention stars and wars
- Cons: our memory use balloons
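- One way to bolt bigrams onto the existing bag of words (a sketch; the single tokens are kept as well):
- sub tokenize_with_bigrams {
-     my $contents = shift;
-     my @words  = split(/\s+/, $contents);
-     my %tokens = map { $_ => 1 } @words;
-     for my $i (0 .. $#words - 1) {
-         $tokens{"$words[$i] $words[$i+1]"} = 1;    # the bigram, e.g. "open source"
-     }
-     return %tokens;
- }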
- ---
- # Improve tokenization, advanced stuff
- ### Use numbers
- Instead of binary (word x is in doc y), we store frequencies (word x appears z times in doc y).
- Pros: damage from weak associations is reduced; easier to find the important words in a document
- Cons: the math becomes more complex; in many cases, accuracy doesn't actually increase
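- Switching the tokenizer from presence to counts is the easy part (what the model does with those counts is where the extra math comes in):
- sub tokenize_counts {
-     my $contents = shift;
-     my %tokens = ();
-     $tokens{$_}++ for split(/\s+/, $contents);    # word x appears z times in doc y
-     return %tokens;
- }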
- ---
- # Improve tokenization, advanced stuff
- ### Use non-token features
- Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.
- ---
- # Improve tokenization, advanced stuff
- ### Use non-token features
- Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.
- We can also grab structural information, like the sender, or subject line, and treat them differently. Or whether the word appears early or late in the document.
- ---
- # Improve tokenization, advanced stuff
- ### Use non-token features
- Sometimes we want to use non-textual attributes of documents. For example, length of document, percent of capital letters.
- We can also grab structural information, like the sender, or subject line, and treat them differently. Or whether the word appears early or late in the document.
- Pros: a little can go a long way
- Cons: selecting these can be a dark art, or an incredible memory burden.
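- A sketch of pulling two such features out of the raw text (the feature names are made up here, and the model would need to treat them differently from word probabilities):
- sub extra_features {
-     my $contents = shift;
-     my $letters  = ($contents =~ tr/a-zA-Z//);    # count of letters
-     my $capitals = ($contents =~ tr/A-Z//);       # count of capital letters
-     return (
-         '__length__'       => length($contents),
-         '__percent_caps__' => $letters ? $capitals / $letters : 0,
-     );
- }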
- ---
- # Which leads us to
- ---
- # Which leads us to
- Tokenization == Vectorization
- ---
- # In other words
- Our documents are all just vectors of numbers.
- ---
- # Or even
- Our documents are all just points in a high-dimensional Cartesian space.
- ---
- # Vectors of numbers
- This concept opens up a whole world of statistical methods for categorization, including decision trees, linear separations, and support vector machines.
- ---
- # Points in space
- And this opens up a whole different world of geometric methods for categorization and information manipulation, including k-nearest-neighbor classification and various clustering algorithms.
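- For instance, once documents are count vectors, the distance between two of them is plain geometry, which is all k-nearest-neighbor needs (a sketch, treating the token hashes as sparse vectors):
- sub euclidean_distance {
-     my ($doc1, $doc2) = @_;
-     my %all = (%{ $doc1 }, %{ $doc2 });    # union of the tokens in either document
-     my $sum = 0;
-     foreach my $token (keys %all) {
-         my $diff = ($doc1->{$token} || 0) - ($doc2->{$token} || 0);
-         $sum += $diff * $diff;
-     }
-     return sqrt($sum);
- }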
- ---
- # Alright
- It's been a long trip. Any questions?
- ---
- # Thanks
- Thanks for coming. Thanks to OS Bridge for having me.