Goodhart’s Law states (roughly) that once a measure becomes a target, it ceases to be a good measure.

In other words, folks will game the metrics. But this assumes that making the metric a target is the only intervention acting on the incentives of the measured parties.

The practical takeaway from this canonical form of Goodhart’s Law is usually something like:

Don't trust measures that are also targets.

or

Don't make a measure a target in the first place.

But this conflicts with another adage: “What gets measured gets managed,” or “What gets measured gets done.” With 30 seconds of internet searching, one would think this quote comes from Peter Drucker. But with 30 more seconds, it turns out the quote is misattributed to Drucker and actually comes from a 1956 paper by Ridgway, which apparently had a flavor similar to Goodhart’s Law, warning against too much quantification. Who knows what 30 more seconds of internet searching would turn up.

In any case, these interpretations of Goodhart’s Law conflict with the common-sense notion that measuring the things you care about is probably a good idea.

I think a more complete version of Goodhart’s Law would state something like:

Once a measure becomes a target, it ceases to be a good measure if the measured parties are not sufficiently incentivized to report honestly.

In other words, once you make a measure a target, you usually introduce an incentive for measured parties to report on it dishonestly, or at least with a skew. In general, it should be possible to compensate for this with incentives pushing in the other direction.
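Here is a toy sketch of what compensating incentives could look like (my illustration; the quantities $g$, $p$, and $k$ are assumptions for the sake of the example, not anything from Goodhart). Suppose a measured party gains $g$ for each unit by which they inflate their report, inflation is detected with probability $p$, and detection carries a penalty of $k$ per inflated unit. Then honest reporting is the rational choice whenever the expected marginal cost of lying exceeds its marginal benefit:

$$ p \cdot k \ge g $$

Raising $p$ (more auditing) or $k$ (stiffer penalties) until this inequality holds is one way to pay for honest reporting without giving up the target.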

Another rephrasing would be something like:

Honest reporting on measures that are also targets has a cost.

But that doesn’t quite roll off the tongue.

What about:

When you make a measure a target, don't forget to incentivize honest reporting.

I kind of like this one. It’s practical at least, and more constructive than the folk interpretation of Goodhart’s Law.