I'm all about leaving politics out of AI, but this comparison between DeepSeek and Claude, another popular chatbot, is extremely interesting.

When asked about Israel and Gaza, DeepSeek (Chinese-owned) spends 10 seconds reasoning about how to "carefully" approach this "sensitive and complex issue".
It delivers a long, thorough answer, outlining both the Israeli and Palestinian perspectives, the broader context, and international concerns.

Claude (American-owned) gives a factually correct, though shorter, answer.

When asked about Tiananmen Square, a sensitive event in Chinese history, DeepSeek declines to answer, inviting me to discuss "math, coding, and logic problems, instead."
Fair enough, I'd also rather discuss those topics.

Claude, on the other hand, gives the more comprehensive answer this time.

A couple of takeaways:

-> It's fascinating to watch the DeepSeek R1 model reason in real time before delivering its final answer.
That's exactly what humans should do! (and often don't)

-> Biases are inevitable.
This is true for humans and therefore also for machines.
Plus, even non-AI tech products we use regularly, including the one I'm posting on right now, have biases naturally embedded in their algorithms.
I don't think that's a good reason to avoid a tech product.
However, it is a good reason to learn how to use it more consciously.