David Galbraith:

AI buttons are different from, say, Photoshop menu commands in that they can simply be a description of the desired outcome rather than a sequence of steps (incidentally, this is why I think a lot of agents’ complexity disappears). For example, Photoshop used to require a complex sequence of steps (drawing around elements with the lasso tool, etc.) to remove clouds from an image. With AI you can just say ‘remove clouds’ and then turn that into a ‘remove clouds’ button. An AI interface is a ‘semantic interface’.
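The idea above — a button that captures an intent rather than a procedure — can be sketched in a few lines. This is a minimal illustration, not any real product’s API: `edit_image` is a stand-in for a call to an actual image model, and the names are invented for the example.

```python
def edit_image(image: str, intent: str) -> str:
    """Stand-in for a real AI image-editing call: apply `intent` to `image`."""
    return f"{image} [edited: {intent}]"

def make_semantic_button(intent: str):
    """Wrap a free-text description of an outcome as a reusable one-click action.

    The button stores no sequence of steps -- only the desired outcome,
    which is re-interpreted by the model each time it is pressed.
    """
    def button(image: str) -> str:
        return edit_image(image, intent)
    return button

# 'remove clouds' becomes a button: one click, one described outcome.
remove_clouds = make_semantic_button("remove clouds")
print(remove_clouds("holiday.jpg"))
```

The point of the sketch is that the button’s entire definition is the sentence ‘remove clouds’ — the lasso-and-mask procedure lives inside the model, not the interface.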

Matt Webb:

So removing the interface bureaucracy is not about simplicity but about increasing expressiveness and capability. What does it look like if we travel down the road of intent-maxing? There’s a philosophy from the dawn of computing: DWIM, a.k.a. Do What I Mean.

DWIM, as described on Wikipedia:

DWIM (do what I mean) computer systems attempt to anticipate what users intend to do, correcting trivial errors automatically rather than blindly executing users’ explicit but potentially incorrect input.
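A minimal sketch of the DWIM idea — executing what the user plausibly meant rather than failing on what they literally typed — using fuzzy string matching. The command set and function names here are invented for illustration; historical DWIM systems (e.g. in Interlisp) corrected spelling in much this spirit.

```python
import difflib

# Hypothetical set of known commands for the example.
COMMANDS = {"list", "copy", "delete", "rename"}

def dwim(command: str) -> str:
    """Return the user's command if it exists; otherwise correct a
    trivial error by picking the closest known command, rather than
    blindly rejecting the literal (but likely mistyped) input."""
    if command in COMMANDS:
        return command
    matches = difflib.get_close_matches(command, COMMANDS, n=1, cutoff=0.6)
    if matches:
        return matches[0]  # do what the user meant, not what they typed
    raise ValueError(f"no plausible reading of {command!r}")

print(dwim("lsit"))  # a transposition is corrected to 'list'
```

The cutoff is the judgment call at the heart of DWIM: too low and the system confidently does the wrong thing, too high and it degenerates into blind literal execution.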

Nan Yu:

To really take advantage of what AI can do, developers need to start thinking about software differently: not as the means to an end, but as the end itself. Don’t deliver tools that allow users to achieve outcomes. Just deliver the outcomes.