As someone who’s been dealing with a lot of low mood lately, it really brightens my day to hear someone express gratitude for my anonymous, unauthorized trail maintenance.
Text blurred for privacy reasons.
Well, there’s Unredacted which just tries to brute force it - see this blog post. Then there’s DepixHMM which uses Hidden Markov Models and links to the research paper it’s based on.
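For the curious, the core of that brute-force approach fits in a few lines: render each guess with the same font and offset as the original, pixelate it the same way, and keep whichever guess matches the blurred blocks best. This is only a sketch of the idea; the font, block size and line dimensions below are placeholders, not values taken from the screenshot.

```python
# Minimal sketch of the brute-force idea: render a guess, pixelate it the same
# way the screenshot was pixelated, and score it against the blurred blocks.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

FONT = ImageFont.truetype("DejaVuSans.ttf", 16)  # assumed font and size
BLOCK = 4                                        # assumed pixelation block size
SIZE = (240, 20)                                 # assumed line dimensions (w, h)

def render(text: str) -> Image.Image:
    """Render candidate text as dark-on-light, like the original message."""
    img = Image.new("L", SIZE, color=255)
    ImageDraw.Draw(img).text((0, 0), text, font=FONT, fill=0)
    return img

def pixelate(img: Image.Image, block: int) -> np.ndarray:
    """Average over block x block cells, like a typical 'mosaic' redaction."""
    w, h = img.size
    small = img.resize((w // block, h // block), Image.Resampling.BOX)
    return np.asarray(small, dtype=float)

def best_guess(blurred_blocks: np.ndarray, candidates: list[str]) -> str:
    """Return the candidate whose pixelated rendering is closest to the blur."""
    def score(text: str) -> float:
        return float(np.mean((pixelate(render(text), BLOCK) - blurred_blocks) ** 2))
    return min(candidates, key=score)
```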
The example Unredacted used is 4 pixels high. The one in the screenshot seems to only be 2 pixels high.
True, this would make it harder. But on the other hand, it’s not a random password, it’s text. If you know (or can guess) the language, you can employ other tricks like “how common is each letter?”, “which letter combinations are most common in this language?”, and so on. Maybe the Hidden Markov Model mentioned in the research paper does exactly that (that would be one of the things Markov models are good at, IIRC).
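As a toy illustration of the “which letter combinations are common” trick, you can rank candidate strings by bigram log-probabilities estimated from some reference text. A real attack would use a proper corpus, and an HMM would additionally combine these transition statistics with per-block pixel evidence; this only shows the language-model half.

```python
# Rank candidate strings by how plausible their letter bigrams are.
# The tiny "corpus" below is illustrative only.
import math
from collections import Counter

def bigram_model(corpus: str) -> dict[str, float]:
    """Log-probabilities of letter bigrams estimated from a reference text."""
    text = "".join(c.lower() for c in corpus if c.isalpha() or c == " ")
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(counts.values())
    return {bg: math.log(n / total) for bg, n in counts.items()}

def plausibility(candidate: str, model: dict[str, float]) -> float:
    """Sum of bigram log-probs; unseen bigrams get a floor penalty, not a ban."""
    floor = math.log(1e-6)
    cand = candidate.lower()
    return sum(model.get(cand[i:i + 2], floor) for i in range(len(cand) - 1))

model = bigram_model("thank you so much for clearing the fallen trees from the path")
# Most plausible candidates come first.
print(sorted(["trees", "trzqs", "table"], key=lambda w: -plausibility(w, model)))
```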
Right, technically even one pixel per letter could be solvable. Different letters would mostly result in slightly different hues, and if multiple letters produce the same one, the right letter could still be guessed from its neighbours and the statistical frequency of each letter.
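A rough sketch of that idea follows. The comment talks about hue, but for dark anti-aliased text on a light background it mostly comes down to average darkness per cell, so this just tabulates per-letter “ink coverage” in an assumed font and lists the letters compatible with an observed pixel value; ties would then be broken with the neighbour/frequency tricks above.

```python
# Each letter, rendered in a fixed font, averages to a characteristic darkness.
# A single pixel per letter keeps roughly that value, so it narrows the options.
import string
import numpy as np
from PIL import Image, ImageDraw, ImageFont

FONT = ImageFont.truetype("DejaVuSans.ttf", 16)  # assumed font and size

def coverage(ch: str) -> float:
    """Average darkness of one letter rendered into a fixed-size cell."""
    img = Image.new("L", (12, 20), color=255)
    ImageDraw.Draw(img).text((0, 0), ch, font=FONT, fill=0)
    return 255.0 - float(np.mean(np.asarray(img)))

TABLE = {ch: coverage(ch) for ch in string.ascii_lowercase}

def candidates_for_pixel(value: float, tolerance: float = 3.0) -> list[str]:
    """All letters whose coverage is within `tolerance` of the observed value."""
    return [ch for ch, cov in TABLE.items() if abs(cov - value) <= tolerance]
```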
We also have the context, so we could specifically look for words like “trees”, “path”, “thank”, and “saw” first.
Isn’t this software available? People keep telling me that it could be decensored, but no one has tried it.
I wonder if an entire line gets blurred with one pattern. If so, knowing what the emojis are supposed to look like could help figure out the blurring pattern.
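If that holds, the emoji acts as a known plaintext: search for the block size and grid offset that make a pixelated rendering of the known glyph reproduce the observed blur, then reuse those parameters for the unknown text. A sketch, assuming the blur is a plain block average (a common mosaic filter, though not necessarily the one used here):

```python
# Recover the pixelation grid from a region whose contents are already known.
import numpy as np

def pixelate(img: np.ndarray, block: int, ox: int, oy: int) -> np.ndarray:
    """Block-average a grayscale array on a grid with the given size and offset,
    returned at full resolution so it can be compared pixel by pixel."""
    region = img[oy:, ox:]
    bh, bw = region.shape[0] // block, region.shape[1] // block
    region = region[:bh * block, :bw * block]
    means = region.reshape(bh, block, bw, block).mean(axis=(1, 3))
    return np.kron(means, np.ones((block, block)))

def fit_grid(known_glyph: np.ndarray, observed_blur: np.ndarray):
    """Find the (block, ox, oy) whose pixelation of the known emoji region best
    reproduces the observed blur; those parameters carry over to the text."""
    best, best_err = None, float("inf")
    for block in range(2, 9):
        for oy in range(block):
            for ox in range(block):
                guess = pixelate(known_glyph, block, ox, oy)
                obs = observed_blur[oy:oy + guess.shape[0], ox:ox + guess.shape[1]]
                if obs.shape != guess.shape:
                    continue
                err = float(np.mean((guess - obs) ** 2))
                if err < best_err:
                    best, best_err = (block, ox, oy), err
    return best
```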