I have read the threads up to now and, despite being ignorant about security research, I would call myself convinced that such a tool will be useful in the near future to shave time off the tasks this kind of work requires.
My problem with this is that transformer-based LLMs still don’t sound to me like the right tool for the job when it comes to such formal languages. They are surely a very expensive way to do it.
Other architectures are getting much less attention because investors are focused on this shiny toy. From my understanding, neurosymbolic AI would do a much better and potentially faster job at a task involving stable concepts.