Some solutions are moving toward bundling user input (prompts) into buttons and menu items, like "summarize this." An understandable response to the UX challenges, but I wonder if it's running counter to the newfound freedom of expression the plain ol' text input box seems to have generated.
Language is the UI, and prompts are the new apps.
I'm seeing this behavior as well, something I'm calling "Prompt Templating" (because that's essentially what's happening). I'll be covering that this week, actually.
I also don't think freeform input fields are the answer for every LLM feature. Imo, the vast majority of humans still suck at asking good questions, and giving them a fast path to higher-quality questions and prompts is the better UX.
How long that remains the case will be the question, though I'm not betting against point-and-click interfaces anytime soon.
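The "fast path" idea above boils down to canned actions like "summarize this" being parameterized prompts behind buttons or menu items. A minimal sketch of that pattern, with all names and prompt wordings purely illustrative:

```typescript
// A minimal sketch of "prompt templating": each button or menu item
// maps to a function that turns the user's selection into a full,
// well-formed prompt. All names and wordings here are illustrative.

type PromptTemplate = (selection: string) => string;

const templates: Record<string, PromptTemplate> = {
  summarize: (text) =>
    `Summarize the following in three bullet points:\n\n${text}`,
  simplify: (text) =>
    `Rewrite the following for a general audience:\n\n${text}`,
  critique: (text) =>
    `List the three weakest points in the following argument:\n\n${text}`,
};

// A click resolves to a complete prompt; the user never has to
// compose the question themselves.
function buildPrompt(action: string, selection: string): string {
  const template = templates[action];
  if (!template) throw new Error(`Unknown action: ${action}`);
  return template(selection);
}
```

The point of the pattern is that the quality of the question is baked in by the designer, while the user only supplies the content to act on.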
Do you think there would be a third model that combines the benefits of both the Inline and Dual windowed pattern?
I'm not sure exactly what that pattern would look like, but I could see progressive disclosure combining both patterns contextually, somewhat like Jasper's view-switching but refined: prompt-building workflows would be triggered only on user request, in more intelligent ways. Microsoft Copilot seems to be doing something similar.
I consider Microsoft PowerPoint Copilot to be somewhat of a hybrid. In the demo, you can see the user instructing the copilot to make changes through chat, but also clicking on the document at the start to keep the initial AI-generated deck.
I'm looking forward to diving deep into Copilot. Patiently waiting for access.
https://medium.com/microsoft-design/behind-the-design-meet-copilot-2c68182a0e70
Check this out @designertom