Interesting! I also noticed that search engines give proper results because they are trained differently, using user searches and clicks.
I think these popular models could give a proper answer, but their safety threshold is so tight that if the AI considers the input even slightly harmful, it refuses to answer.
Out of curiosity I tried it with phind (a programming-focused model) and it answered perfectly: https://www.phind.com/search?cache=f8lbjt4x6jwct9mfsw6n3j9v