I'm sorry to bring up substrings again, but it's still an issue. For example, try the word "f__kwit" or "a__f__k" (censoring mine). So it's not really a matter of finding the word in a string like "oh s**t" but within other words themselves.
It's not hard to implement naively, it just means you go from having O(1) lookups on the dictionary hash to having O(d) (size of dictionary) lookups per word. By naive, I mean for each word in the input text, you check every word in the dictionary to see if the dictionary word is a substring of that word. So your total runtime goes from O(n) to O(nd).
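The naive approach described above can be sketched in a few lines. The word list here is a hypothetical placeholder (using mild stand-ins rather than real profanity); the project's actual dictionary would be used instead:

```python
# Naive substring matching: for each word in the input text, scan every
# dictionary entry and censor the word if any entry occurs inside it.
# For n input words and a d-entry dictionary this is O(n * d) checks.
BADWORDS = {"fudge", "shoot"}  # hypothetical stand-in word list

def censor_naive(text: str) -> str:
    out = []
    for word in text.split():
        # O(d) scan of the whole dictionary per word, instead of the
        # O(1) hash lookup an exact-match filter would do.
        if any(bad in word.lower() for bad in BADWORDS):
            out.append("*" * len(word))
        else:
            out.append(word)
    return " ".join(out)

print(censor_naive("oh shoot that fudgewit"))  # oh ***** that ********
```

This also demonstrates the false-positive problem discussed next: any word merely containing a dictionary entry gets censored.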
The problem is that your false positive rate will go through the roof, probably unacceptably so. For example, "assemble" would be caught by naive substring matching. Alternatively, you can do something like:
check if word is in dictionary (O(1))
if not, attempt to split it into multiple words and check each one for profanity (O(2^m), where m is the length of the word, if implemented naively; memoization can get it down to O(m^2), and there may be better approaches out there)
This would basically increase the runtime to O(nm^2) (where m would be the average length of a word). So maybe a factor of 25 worse than current?
It was bound to happen. Damn you classical computer science! I'd have to look that stuff up if I wanted to know the specifics... I'll keep your comment in mind when I tackle this... unless someone beats me to it... ;)