I feel that I have a strong instinct as a user about what I should expect from various
inputs. My studies in Data Analytics and Data Science have provided a foundation for
concepts underlying AI, such as Machine Learning. I certainly have a great deal more to
learn, and am excited to do so.
One important caveat is that expertise and experience help "judgment" tremendously. I know
what a simple web page should look like, and I know when it's cluttered, script-heavy, and
poorly organized. I knew from the start of this page that I didn't want it cluttered or
over-engineered with JavaScript. I understand what Python type hinting should look like,
and I can spot deprecated type-hinting idioms from versions before Python 3.9 and 3.10.
I'm aware of the common AI pitfalls to watch for in my areas of expertise.
In other domains (perhaps C++, baseball coaching, biological research, or writing in a strict
poem format), I would need help or detailed feedback from an expert. I would also need to
take some time to immerse myself in the topic and understand what's working well with
Claude, what isn't, and how to measure those issues over time.
It's been important from day one to watch for hallucination, capitulation due to sycophancy,
over-engineering, and user-introduced biases. One of my favorite experiments was when my
wife suggested a political question that was important to her to see how AI would respond.
It gave the answer she wanted. I quickly recognized that the phrasing of her question
introduced bias, so I asked the same question with a different framing and got an answer she
didn't like equally quickly. It was interesting trying to formulate a completely
unbiased question, but it was impossible—even the instruction to remove bias created bias.
Clearly, getting unbiased answers to complex human perspective questions is not fully
promptable, nor are the resulting answers conclusive. It's important to remember that many
topics are subjective. In those cases, AI is likely to reflect your own values back to you.
That mirroring is inherent to this kind of introspective exchange, and it's always on, which
makes it essential to recognize when the distortion originates with the prompter.
Back to concrete topics: it has been possible to curb hallucinations on stubborn prompts by
asking the AI to look for reliable contradictory evidence, challenge its assumptions, back up
answers with specific sources, or identify reasons previous responses overlooked facts.
(Though that last one sometimes engages "hallucinate an excuse" mode.)
I think my prompts could benefit from more study of logical fallacies and debate tactics.
Learning to spot them would sharpen my immediate judgment and give me more "nameable" tools
to use in follow-up prompts or in metaprompting.
Other issues I've worked on include overstated compliance ("Here is your technical
documentation: clear and simple, with no violence or hate speech.") and writing quality
(over-dramatization of mundane points, heavy use of tropes, lack of variety in voice, not
trusting the reader, and so on).
As an aside, it is fascinating to apply to work in an industry that understands how it
created a technology, yet still does not fully understand how that technology works or why
it works as well as it does. I'm excited to have a front-row seat to the research in this
area.