
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<html> <head>
<title>Write your own Bayesian Classifier!</title>
<style>
.slide {
border: 2px solid #888833;
background-color: #AABBAA;
padding: 2%;
width: 94%;
}
pre {
border: 1px solid #444422;
background-color: #BBCCBB;
padding: 2px;
}
</style>
<script src="scripts/jquery-1.3.1.min.js" type="text/javascript"></script>
<script src="scripts/slideshow.js" type="text/javascript"></script>
</head>
<body>
<div class='slide'>
<h1>Write your own Bayesian Classifier!</h1>
<p>John Melesky
(Open Source Bridge, June 2009)</p>
</div>
<div class='slide'>
<h1>What's a Bayesian Classifier?</h1>
</div>
<div class='slide'>
<h1>What's a Bayesian Classifier?</h1>
<p>Something which classifies based on:</p>
<ol>
<li>Information about past categorizations</li>
<li>Bayesian statistics (Bayes' Theorem)</li>
</ol>
</div>
<div class='slide'>
<h1>What's Bayes' Theorem?</h1>
<p>Let's check <a href="http://phaedrusdeinus.org/Bayes&apos;_theorem.html">Wikipedia</a>.</p>
</div>
<div class='slide'>
<h1>Derrr....</h1>
</div>
<div class='slide'>
<h1>An example: random drug testing</h1>
<p>3% of the population are using Zopadrine.</p>
<p>We have a drug test with a 98% accuracy rate.</p>
</div>
<div class='slide'>
<h1>An example: random drug testing</h1>
<p>3% of the population are using Zopadrine.</p>
<p>We have a drug test with a 98% accuracy rate.</p>
<p>Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?</p>
</div>
<div class='slide'>
<h1>Break it down</h1>
<p>Let's assume a population of 10000 people.</p>
</div>
<div class='slide'>
<h1>Break it down</h1>
<p>3% are users.</p>
<table border=1>
<tr><td></td><td>Population</td></tr>
<tr><td>Clean</td><td>9700</td></tr>
<tr><td>Users</td><td>300</td></tr>
<tr><td>Total</td><td>10000</td></tr>
</table>
</div>
<div class='slide'>
<h1>Break it down</h1>
<p>The test is 98% accurate.</p>
<table border=1>
<tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
<tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
<tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
<tr><td>Total</td><td>10000</td><td>9512</td><td>488</td></tr>
</table>
</div>
<div class='slide'>
<h1>Break it down</h1>
<p>Bob is tested, and the result is positive. How likely is it that Bob uses Zopadrine?</p>
<table border=1>
<tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
<tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
<tr><td>Users</td><td>300</td><td>6</td><td bgcolor="#ff6666">294</td></tr>
<tr><td>Total</td><td>10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
</table>
</div>
<div class='slide'>
<h1>Break it down</h1>
<p>294 / 488 = 60.24%</p>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<p><img alt="Bayes&apos; Theorem" src="img/bayes.png" /></p>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P = probability</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>A = "is a user"</td></tr>
<tr><td>B = "tests positive"</td></tr>
<tr><td>x|y = x, given y</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P(A) = probability of being a user</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
<tr><td>P(B) = probability of testing positive</td></tr>
<tr><td>P(A|B) = probability Bob's a user</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>P(B|A) = probability of testing positive, given being a user</td></tr>
<tr><td>P(B) = probability of testing positive</td></tr>
<tr><td>P(A|B) = probability Bob's a user</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>P(B|A) = 98%</td></tr>
<tr><td>P(B) = probability of testing positive</td></tr>
<tr><td>P(A|B) = probability Bob's a user</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to the numbers</h1>
<table border=1>
<tr><td></td><td>Population</td><td>Test negative</td><td>Test positive</td></tr>
<tr><td>Clean</td><td>9700</td><td>9506</td><td>194</td></tr>
<tr><td>Users</td><td>300</td><td>6</td><td>294</td></tr>
<tr><td>Total</td><td bgcolor="#ff6666">10000</td><td>9512</td><td bgcolor="#ff6666">488</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>P(B|A) = 98%</td></tr>
<tr><td>P(B) = 4.88%</td></tr>
<tr><td>P(A|B) = probability Bob's a user</td></tr>
</table>
</div>
<div class='slide'>
<h1>Back to Bayes' Theorem</h1>
<table>
<tr><td>P(A) = 3%</td><td rowspan="4"><img src="img/bayes.png"></td></tr>
<tr><td>P(B|A) = 98%</td></tr>
<tr><td>P(B) = 4.88%</td></tr>
<tr><td>P(A|B) = (98% * 3%)/4.88% = 60.24%</td></tr>
</table>
</div>
<div class='slide'>
<h1>This works with population numbers, too</h1>
<pre><code>P(A) = 300
P(B|A) = 9800
P(B) = 488
P(A|B) = 6024
</code></pre>
<p>Which is useful for reasons we'll see later.</p>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>My examples are going to be in perl.</p>
<pre><code>sub bayes {
    my ($p_a, $p_b, $p_b_a) = @_;
    my $p_a_b = ($p_b_a * $p_a) / $p_b;
    return $p_a_b;
}
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>But you could just as easily work in Python.</p>
<pre><code>def bayes(p_a, p_b, p_b_a):
    return (p_b_a * p_a) / p_b
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>Or Java</p>
<pre><code>public static Double bayes(Double p_a, Double p_b, Double p_b_a) {
    Double p_a_b = (p_b_a * p_a) / p_b;
    return p_a_b;
}
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>Or SML</p>
<pre><code>fun bayes (p_a, p_b, p_b_a) = (p_b_a * p_a) / p_b
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>Or Erlang</p>
<pre><code>bayes(P_A, P_B, P_B_A) -&gt;
    (P_B_A * P_A) / P_B.
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>Or Haskell</p>
<pre><code>bayes p_a p_b p_b_a = (p_b_a * p_a) / p_b
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>Or Scheme</p>
<pre><code>(define (bayes p_a p_b p_b_a)
  (/ (* p_b_a p_a) p_b))
</code></pre>
</div>
<div class='slide'>
<h1>Bayes' Theorem, in code</h1>
<p>LOLCODE, anyone? Befunge? Unlambda?</p>
<p>If it supports floating point operations, you're set.</p>
</div>
<div class='slide'>
<h1>How does that make a classifier?</h1>
<pre><code>A = "is spam"
B = "contains the string 'viagra'"
</code></pre>
<p>What's P(A|B)?</p>
</div>
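<div class='slide'>
<h1>How does that make a classifier?</h1>
<p>Here's that question worked through in Python, using the same bayes arithmetic as before. The document counts are hypothetical, just to make the division concrete.</p>

```python
# Hypothetical training counts: how many spam/non-spam documents,
# and how many of each contain the string 'viagra'.
spam_docs = 400
notspam_docs = 600
spam_with_viagra = 300
notspam_with_viagra = 3

total_docs = spam_docs + notspam_docs
p_a = spam_docs / total_docs                                 # P(is spam)
p_b = (spam_with_viagra + notspam_with_viagra) / total_docs  # P(contains 'viagra')
p_b_a = spam_with_viagra / spam_docs                         # P(contains 'viagra' | spam)

p_a_b = (p_b_a * p_a) / p_b                                  # Bayes' Theorem
print(round(p_a_b, 4))                                       # P(spam | contains 'viagra')
```

</div>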
<div class='slide'>
<h1>What do we need for a classifier?</h1>
<ol>
<li>We need to tokenize our training set</li>
<li>Then build a model</li>
<li>Then test that model</li>
<li>Then apply that model to new data</li>
</ol>
</div>
<div class='slide'>
<h1>What do we need for a classifier?</h1>
<ol>
<li><strong>We need to tokenize our training set</strong></li>
<li>Then build a model</li>
<li>Then test that model</li>
<li>Then apply that model to new data</li>
</ol>
</div>
<div class='slide'>
<h1>Tokenizing your training set</h1>
<p><em>Fancy</em> perl</p>
<pre><code>sub tokenize {
    my $contents = shift;
    my %tokens = map { $_ =&gt; 1 } split(/\s+/, $contents);
    return %tokens;
}
</code></pre>
</div>
<div class='slide'>
<h1>Tokenizing your training set</h1>
<pre><code>sub tokenize_file {
    my $filename = shift;
    my $contents = '';
    open(FILE, $filename) or die "Can't open $filename: $!";
    read(FILE, $contents, -s FILE);
    close(FILE);
    return tokenize($contents);
}
</code></pre>
</div>
<div class='slide'>
<h1>Tokenizing your training set</h1>
<p>This is the "bag of words" model.</p>
<p>For each category (spam, not spam), we need to know how many documents in the training set contain a given word.</p>
</div>
<div class='slide'>
<h1>Tokenizing your training set</h1>
<pre><code>my %spam_tokens = ();
my %notspam_tokens = ();

foreach my $file (@spam_files) {
    my %tokens = tokenize_file($file);
    %spam_tokens = combine_hash(\%spam_tokens, \%tokens);
}

foreach my $file (@notspam_files) {
    my %tokens = tokenize_file($file);
    %notspam_tokens = combine_hash(\%notspam_tokens, \%tokens);
}
</code></pre>
</div>
<div class='slide'>
<h1>Tokenizing your training set</h1>
<pre><code>sub combine_hash {
    my ($hash1, $hash2) = @_;
    my %resulthash = %{ $hash1 };
    foreach my $key (keys(%{ $hash2 })) {
        if ($resulthash{$key}) {
            $resulthash{$key} += $hash2-&gt;{$key};
        } else {
            $resulthash{$key} = $hash2-&gt;{$key};
        }
    }
    return %resulthash;
}
</code></pre>
</div>
<div class='slide'>
<h1>What do we need for a classifier?</h1>
<ol>
<li>We need to tokenize our training set</li>
<li><strong>Then build a model</strong></li>
<li>Then test that model</li>
<li>Then apply that model to new data</li>
</ol>
</div>
<div class='slide'>
<h1>Build a model</h1>
<pre><code>my %total_tokens = combine_hash(\%spam_tokens, \%notspam_tokens);

my $total_spam_files = scalar(@spam_files);
my $total_notspam_files = scalar(@notspam_files);
my $total_files = $total_spam_files + $total_notspam_files;

my $probability_spam = $total_spam_files / $total_files;
my $probability_notspam = $total_notspam_files / $total_files;
</code></pre>
</div>
<div class='slide'>
<h1>Build a model</h1>
<p>In this case, our model is just a bunch of numbers.</p>
</div>
<div class='slide'>
<h1>Build a model</h1>
<p>In this case, our model is just a bunch of numbers.</p>
<p>(a little secret: it's <em>all</em> a bunch of numbers)</p>
</div>
<div class='slide'>
<h1>What do we need for a classifier?</h1>
<ol>
<li>We need to tokenize our training set</li>
<li>Then build a model</li>
<li><strong>Then test that model</strong></li>
<li>Then apply that model to new data</li>
</ol>
</div>
<div class='slide'>
<h1>*cough* *cough*</h1>
</div>
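<div class='slide'>
<h1>Then test that model</h1>
<p>Testing deserves more than a cough: hold out some labeled documents, classify them, and count how often the predictions match the labels. A minimal Python sketch, with a stand-in classifier and hypothetical held-out data:</p>

```python
# Stand-in for the real model: a trivial keyword rule.
def classify(doc):
    return 'spam' if 'viagra' in doc else 'notspam'

# Hypothetical held-out documents with known labels.
held_out = [
    ('buy viagra now', 'spam'),
    ('meeting at noon', 'notspam'),
    ('cheap viagra deals', 'spam'),
    ('lunch tomorrow?', 'notspam'),
]

correct = sum(1 for doc, label in held_out if classify(doc) == label)
accuracy = correct / len(held_out)
print(accuracy)
```

</div>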
<div class='slide'>
<h1>What do we need for a classifier?</h1>
<ol>
<li>We need to tokenize our training set</li>
<li>Then build a model</li>
<li>Then test that model</li>
<li><strong>Then apply that model to new data</strong></li>
</ol>
</div>
<div class='slide'>
<h1>Apply that model to new data</h1>
<pre><code>my %test_tokens = tokenize_file($test_file);

my $vocab_size = scalar(keys(%total_tokens));
my $spam_accumulator = 1;
my $notspam_accumulator = 1;

foreach my $token (keys(%test_tokens)) {
    if (exists($total_tokens{$token})) {
        my $p_t_s = (($spam_tokens{$token} || 0) + 1) /
                    ($total_spam_files + $vocab_size);
        $spam_accumulator = $spam_accumulator * $p_t_s;

        my $p_t_ns = (($notspam_tokens{$token} || 0) + 1) /
                     ($total_notspam_files + $vocab_size);
        $notspam_accumulator = $notspam_accumulator * $p_t_ns;
    }
}
</code></pre>
</div>
<div class='slide'>
<h1>Apply that model to new data</h1>
<pre><code>my $score_spam = bayes( $probability_spam,
                        $vocab_size,
                        $spam_accumulator );
my $score_notspam = bayes( $probability_notspam,
                           $vocab_size,
                           $notspam_accumulator );

my $likelihood_spam = $score_spam / ($score_spam + $score_notspam);
my $likelihood_notspam = $score_notspam / ($score_spam + $score_notspam);

printf("likelihood of spam email: %0.2f %%\n", ($likelihood_spam * 100));
</code></pre>
</div>
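<div class='slide'>
<h1>Apply that model to new data</h1>
<p>The same scoring loop, condensed into Python with the +1 (Laplace) smoothing spelled out. All counts here are hypothetical stand-ins for the hashes built during training.</p>

```python
# Hypothetical per-category token document counts from training.
spam_tokens = {'viagra': 290, 'cheap': 200}
notspam_tokens = {'meeting': 500, 'cheap': 40}
total_tokens = set(spam_tokens) | set(notspam_tokens)
total_spam_files, total_notspam_files = 300, 700
vocab_size = len(total_tokens)

p_spam = total_spam_files / (total_spam_files + total_notspam_files)
p_notspam = 1 - p_spam

# Accumulate P(token|category) for each known token in the test document.
spam_acc, notspam_acc = p_spam, p_notspam
for token in ['cheap', 'viagra']:   # tokens of a hypothetical test file
    if token in total_tokens:
        spam_acc *= (spam_tokens.get(token, 0) + 1) / (total_spam_files + vocab_size)
        notspam_acc *= (notspam_tokens.get(token, 0) + 1) / (total_notspam_files + vocab_size)

likelihood_spam = spam_acc / (spam_acc + notspam_acc)
print(round(likelihood_spam, 4))
```

</div>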
<div class='slide'>
<h1>Boom</h1>
</div>
<div class='slide'>
<h1>What sucks?</h1>
</div>
<div class='slide'>
<h1>What sucks?</h1>
<ul>
<li>Our tokenization</li>
</ul>
</div>
<div class='slide'>
<h1>What sucks?</h1>
<ul>
<li>Our tokenization</li>
<li>Our memory limitations</li>
</ul>
</div>
<div class='slide'>
<h1>What sucks?</h1>
<ul>
<li>Our tokenization</li>
<li>Our memory limitations</li>
<li>Saving/loading models</li>
</ul>
</div>
<div class='slide'>
<h1>Improve memory use</h1>
<h3>Limit the number of tokens</h3>
<p>We want to use the tokens with the highest information value. That means tokens that appear predominantly in one category but not the other.</p>
</div>
<div class='slide'>
<h1>Improve memory use</h1>
<h3>Limit the number of tokens</h3>
<p>We want to use the tokens with the highest information value. That means tokens that appear predominantly in one category but not the other.</p>
<p>There are a bunch of ways to calculate this, though the big one is Information Gain.</p>
</div>
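<div class='slide'>
<h1>Improve memory use</h1>
<p>A sketch of Information Gain for one token: compare the entropy of the spam/notspam split before and after asking "does the document contain this token?". All document counts here are hypothetical.</p>

```python
from math import log2

def entropy(pos, neg):
    # Shannon entropy of a two-class split, in bits.
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            h -= p * log2(p)
    return h

# Hypothetical counts: spam/notspam documents with and without the token.
spam_with, spam_without = 280, 20
notspam_with, notspam_without = 30, 670
total = spam_with + spam_without + notspam_with + notspam_without

h_before = entropy(spam_with + spam_without, notspam_with + notspam_without)
with_frac = (spam_with + notspam_with) / total
h_after = (with_frac * entropy(spam_with, notspam_with)
           + (1 - with_frac) * entropy(spam_without, notspam_without))

gain = h_before - h_after   # higher gain = more informative token
print(round(gain, 3))
```

</div>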
<div class='slide'>
<h1>Improve tokenization, simple stuff</h1>
<ul>
<li>Weed out punctuation</li>
<li>Weed out stopwords</li>
<li>normaLize CASE</li>
<li>Strip out markup</li>
</ul>
</div>
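<div class='slide'>
<h1>Improve tokenization, simple stuff</h1>
<p>All four of those fixes fit in a few lines of Python. The stopword list here is a tiny illustrative subset, not a real one.</p>

```python
import re

STOPWORDS = {'the', 'a', 'an', 'and', 'or', 'of', 'to', 'is'}

def tokenize(text):
    text = re.sub(r'<[^>]+>', ' ', text)    # strip out markup
    text = text.lower()                     # normalize case
    words = re.findall(r'[a-z0-9]+', text)  # weed out punctuation
    return [w for w in words if w not in STOPWORDS]  # weed out stopwords

print(tokenize('<b>The</b> price of Viagra is DOWN!'))
```

</div>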
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Stemming</h3>
<p>"wrestling", "wrestler", "wrestled", and "wrestle" are all the same word concept.</p>
<p>Pros: fewer tokens, related tokens match</p>
<p>Cons: some words are hard to stem correctly (e.g. "cactus")</p>
</div>
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Include bigrams</h3>
<p>Bigrams are token pairs. For example, "open source", "ron paul", "twitter addict".</p>
<p>Pros: we start distinguishing between Star Wars and astronomy wars</p>
<p>Cons: our memory use balloons</p>
</div>
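<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<p>Generating bigrams is a one-liner over the token list. A Python sketch:</p>

```python
def with_bigrams(words):
    # Pair each token with its successor, keeping the single tokens too.
    bigrams = [' '.join(pair) for pair in zip(words, words[1:])]
    return words + bigrams

print(with_bigrams(['star', 'wars', 'fan']))
```

</div>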
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Use numbers</h3>
<p>Instead of binary (word x is in doc y), we store frequencies (word x appears z times in doc y).</p>
<p>Pros: damage from weak associations is reduced; it's easier to find the important words in a document</p>
<p>Cons: the math becomes more complex; in many cases, accuracy doesn't actually increase</p>
</div>
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Use non-token features</h3>
<p>Sometimes we want to use non-textual attributes of documents. For example, the length of the document, or the percentage of capital letters.</p>
</div>
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Use non-token features</h3>
<p>Sometimes we want to use non-textual attributes of documents. For example, the length of the document, or the percentage of capital letters.</p>
<p>We can also grab structural information, like the sender or the subject line, and treat it differently. Or whether a word appears early or late in the document.</p>
</div>
<div class='slide'>
<h1>Improve tokenization, advanced stuff</h1>
<h3>Use non-token features</h3>
<p>Sometimes we want to use non-textual attributes of documents. For example, the length of the document, or the percentage of capital letters.</p>
<p>We can also grab structural information, like the sender or the subject line, and treat it differently. Or whether a word appears early or late in the document.</p>
<p>Pros: a little can go a long way</p>
<p>Cons: selecting these can be a dark art, or an incredible memory burden</p>
</div>
<div class='slide'>
<h1>Which leads us to</h1>
</div>
<div class='slide'>
<h1>Which leads us to</h1>
<p>Tokenization == Vectorization</p>
</div>
<div class='slide'>
<h1>In other words</h1>
<p>Our documents are all just vectors of numbers.</p>
</div>
<div class='slide'>
<h1>Or even</h1>
<p>Our documents are all just points in a high-dimensional Cartesian space.</p>
</div>
<div class='slide'>
<h1>Vectors of numbers</h1>
<p>This concept opens up a whole world of statistical methods for categorization, including decision trees, linear separators, and support vector machines.</p>
</div>
<div class='slide'>
<h1>Points in space</h1>
<p>And this opens up a whole different world of geometric methods for categorization and information manipulation, including k-nearest-neighbor classification and various clustering algorithms.</p>
</div>
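<div class='slide'>
<h1>Points in space</h1>
<p>For instance, once documents are points, k-nearest-neighbor classification is just "vote among the closest training points". A sketch with hypothetical three-dimensional vectors:</p>

```python
from collections import Counter
from math import dist

# Hypothetical training vectors (e.g. token counts) with labels.
training = [
    ((5.0, 0.0, 1.0), 'spam'),
    ((4.0, 1.0, 0.0), 'spam'),
    ((0.0, 6.0, 2.0), 'notspam'),
    ((1.0, 5.0, 3.0), 'notspam'),
]

def knn(point, k=3):
    # Sort training points by Euclidean distance, vote among the k nearest.
    nearest = sorted(training, key=lambda item: dist(point, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn((4.0, 0.0, 1.0)))
```

</div>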
<div class='slide'>
<h1>Alright</h1>
<p>It's been a long trip. Any questions?</p>
</div>
<div class='slide'>
<h1>Thanks</h1>
<p>Thanks for coming. Thanks to OS Bridge for having me.</p>
</div>
</body></html>