When I started doing accessibility testing, I relied too heavily on HTML validators to verify whether a site was compliant. I would use the web developer toolbar and the W3C validator, and do some basic testing with a screen reader, without fully understanding the complexity of the end users' needs.
As I gained more knowledge of the end users and the constraints of the technology, I had to step back from relying on the tools because of their many false positives, and test the reported errors manually to ensure I reported actual bugs. For instance, the W3C validator would claim a variable had been duplicated, but one “duplicate” would be in a comment describing the variable, so it was not a bug.
I have also found that what a tool marks as a semantic error may be no issue at all for assistive technology, while perfectly valid markup can still cause problems: a link with both an aria-label and a title, for example, may have both read out by a screen reader. There are also HTML validators, like the WAVE tool, that highlight errors other validators will not, so which one is correct?
Some other tools, like color ratio analyzers, will miss valid bugs or flag failures that are not actually failures. For instance, they will sometimes mark two colors as breaking success criterion 1.4.3, but when you inspect the elements and check the colors actually used, you find they do not break the criterion at all. One such example is the Juicy Studio tool: the Firefox add-on tends to report a ratio slightly off from the one given by their website analyzer. Often, I will note the highlighted error, then validate the actual colors used on the website to see whether the failure is genuine (a practice also mentioned in a review of the tool).
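When I double-check a flagged contrast failure, I am really just recomputing the WCAG formulas by hand. The sketch below implements the relative-luminance and contrast-ratio definitions from WCAG 2.x so a suspicious tool result can be verified against the colors pulled from the inspector; the hex colors used here are hypothetical examples, not taken from any particular site.

```python
# Minimal sketch of a manual check against WCAG success criterion 1.4.3.
# Formulas follow the WCAG definitions of relative luminance and
# contrast ratio; the example colors below are hypothetical.

def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color written like '#767676'."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L1 + 0.05) / (L2 + 0.05), with L1 the lighter of the two."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# 1.4.3 requires at least 4.5:1 for normal text (3:1 for large text).
print(round(contrast_ratio("#000000", "#ffffff"), 2))  # black on white: 21.0
print(round(contrast_ratio("#767676", "#ffffff"), 2))  # 4.54, passes AA
```

If a tool reports, say, 4.4:1 but this computation on the inspected colors gives 4.54:1, the tool result is the one I treat with suspicion.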