Perfect timing. I’m about to build an AI org/roles/workflow with multiple agents. This is good advice. I think, like any good system, the agents need self-correction mechanisms. I’m planning on training mine using the Socratic method - having them ask questions and then assessing the answers together. That may keep them from being starved of context and may reveal gaps. Have you tried that? It will certainly take more time, but my hope is that the investment in teaching yields better outputs.
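The Socratic loop described above could be sketched roughly like this - a minimal, framework-agnostic version where the asker, answerer, and assessor are plain callables (all three names are hypothetical placeholders, not any real agent API):

```python
def socratic_round(asker, answerer, assessor, topic: str):
    """One Socratic round: the asker probes the topic, the answerer
    responds, and the assessor reviews the pair and flags gaps."""
    question = asker(topic)          # agent generates a probing question
    response = answerer(question)    # second agent attempts an answer
    gaps = assessor(question, response)  # joint assessment; returns flagged gaps
    return question, response, gaps
```

In practice each callable would wrap a model call, and you would run rounds until the assessor returns no gaps.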
Interesting - let's talk more about your use case and what your final output is intended to look like. But yes, when I am using AI, especially for a larger-scale project, I ask it to fill in the gaps and review my logic:
1) Ask me any clarifying questions before proposing a solution.
2) Weigh the pros and cons of each solution and, based on my requirements, recommend one method (the AI sometimes has a hard time committing to a choice, so I try to force it to).
3) For larger projects, I always try to create a project file and instructions so I can keep track of it.
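Steps 1 and 2 above can be captured as reusable prompt fragments - a rough sketch, where `build_workflow_prompts` is a hypothetical helper and the wording of each prompt is just an example, not a tested template:

```python
# Step 1: force clarifying questions before any solution is attempted.
CLARIFY_PROMPT = (
    "Before proposing a solution, ask me any clarifying questions "
    "you need answered."
)

# Step 2: force a single committed recommendation, not a hedge.
DECIDE_PROMPT = (
    "Weigh the pros and cons of each candidate solution against my "
    "requirements, then commit to exactly one recommendation."
)

def build_workflow_prompts(task: str) -> list[str]:
    """Return the prompts in the order they'd be sent in a session."""
    return [f"{task}\n\n{CLARIFY_PROMPT}", DECIDE_PROMPT]
```

The second prompt's "commit to exactly one" phrasing is the forcing move mentioned above - without it, models often list options and stop.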
You can have AI update it continuously so it's always current. When I need to come back to it, it has the most up-to-date details.
I have also experimented with "QA agents" that review the requirements and ensure the output meets them, saving me time doing it manually. For example, for ad creation, the agent checks character counts, messaging guidelines, and so on as a last step to refine the results.
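The deterministic parts of that QA pass (character counts, banned phrases) don't even need a model - here's a minimal sketch, where the specific limits and banned phrases are made-up examples, not any platform's real rules:

```python
# Hypothetical per-field character limits (e.g. a 30-char headline cap,
# similar to what some ad platforms enforce).
AD_LIMITS = {"headline": 30, "description": 90}

# Example messaging-guideline phrases to flag (illustrative only).
BANNED_PHRASES = ["click here", "best ever"]

def qa_check(ad: dict) -> list[str]:
    """Return a list of human-readable issues; an empty list means the ad passes."""
    issues = []
    for field, limit in AD_LIMITS.items():
        text = ad.get(field, "")
        if len(text) > limit:
            issues.append(f"{field} is {len(text)} chars (limit {limit})")
    for phrase in BANNED_PHRASES:
        for field, text in ad.items():
            if phrase in text.lower():
                issues.append(f"{field} contains banned phrase '{phrase}'")
    return issues
```

Running this as the final step and feeding any issues back to the drafting agent gives you the refine-at-the-end loop described above, with the subjective guideline checks left to the QA agent itself.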