Design pattern: A hood to look under
Technology is getting better at doing things on behalf of its users. "Don't worry about that," it says. "Tell me what you want, and I'll do the rest." (Read more about how tech is shifting users from task-doers to flow-managers in Treating Users (Like a Boss).)
This trend is great because it saves users tedious work that computers are better at doing. But people aren't comfortable just handing control over to a system, especially when it's an opaque "black box" of a function that delivers only the end result. As with a car, users need a hood to look under; only after building enough trust will they close it up and get back behind the wheel.
Problem: Trust in a new system is never automatic
Spam filters are a great example. Filters are getting very good at catching those VIAAAGRAH CHEEEP emails and automatically tucking them into the trash. But even if a spam filter were perfect, with 100% accuracy, a new user is left wondering: are there any "false positives"? Is some important message wrongly marked as spam, one whose deletion carries real consequences? What if, say, a friend on vacation sends a hilarious photograph of a street vendor selling Tic-Tacs as cheap Viagra, and expects a LOL in response?
It takes a while of letting the system run and personally vetting the results before a user learns to trust the algorithm. During this time, if the system isn't perfect, the user can help improve it by refining its parameters: whitelisting that vacationing friend, or blacklisting the spam that did slip through to the inbox.
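The whitelist/blacklist refinement described above can be sketched in a few lines. This is a minimal illustration, not a real spam filter: score_spam() is a hypothetical stand-in for a trained classifier, and the addresses are made up. The point is that user-supplied lists override the opaque model, giving the user a direct handle on its mistakes.

```python
def score_spam(sender, body):
    # Hypothetical stand-in for a trained classifier:
    # returns a probability-like score in [0, 1].
    spammy_words = {"viagra", "cheap", "winner"}
    hits = sum(1 for w in body.lower().split() if w in spammy_words)
    return min(1.0, hits / 3)

def classify(sender, body, whitelist, blacklist, threshold=0.5):
    if sender in whitelist:   # user vouched for this sender
        return "inbox"
    if sender in blacklist:   # user flagged this sender
        return "spam"
    return "spam" if score_spam(sender, body) >= threshold else "inbox"

whitelist = {"friend@example.com"}  # the vacationing friend
blacklist = set()

# The joke photo's caption would trip the classifier,
# but the whitelist keeps it in the inbox.
print(classify("friend@example.com",
               "cheap viagra tic-tacs LOL",
               whitelist, blacklist))  # → inbox
```

The key design choice is that the user's explicit parameters always win over the model's score, so every correction the user makes is guaranteed to stick.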
Solution: Let the user see your work
When building interfaces that include these kinds of agents, first make it clear that an agent is at work and what it's trying to do. Let the user know when it's working, and provide easy access to its results. Make those results easy to understand and easy to check for mistakes.
At first, commit to results only after the user has approved them. Provide tools to customize and improve the algorithm. Once trust is built, offer an easy way to "close the hood" and let the agent continue its work in a more automated, unobtrusive way.
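The approve-first, automate-later flow above can be sketched as a small state machine. This is a hypothetical illustration of the pattern, not a prescribed implementation: the class and method names are invented, and "results" are just strings.

```python
class SupervisedAgent:
    """An agent whose results are held for review until the
    user opts into automatic mode ("closes the hood")."""

    def __init__(self):
        self.automatic = False  # hood starts open
        self.pending = []       # results awaiting user approval
        self.committed = []     # results that have taken effect

    def produce(self, result):
        if self.automatic:
            self.committed.append(result)  # trusted: commit directly
        else:
            self.pending.append(result)    # untrusted: hold for review

    def approve(self, result):
        # The user vets a result; only then does it take effect.
        self.pending.remove(result)
        self.committed.append(result)

    def close_the_hood(self):
        # The user has built enough trust: let the agent
        # run unattended from now on.
        self.automatic = True

agent = SupervisedAgent()
agent.produce("archive newsletter")
agent.approve("archive newsletter")  # manually vetted
agent.close_the_hood()
agent.produce("archive digest")      # now committed automatically
print(agent.committed)  # → ['archive newsletter', 'archive digest']
```

Keeping the pending queue visible even after closing the hood is one way to preserve the "easy access to results" from the previous step, so the user can reopen the hood whenever trust wavers.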
Giving users a hood to look under helps them learn a new agent at their own pace, tweak it, and trust it, easing into the new gear and faster speed.