It may seem that the AI industry is entering a “trough of disillusionment”. Several high-profile studies suggest that a significant number of AI initiatives in the corporate world fail to deliver value. OpenAI released statistics on what ChatGPT is being used for, and besides a certain (low) percentage of text-writing tasks, it is mostly used to ask questions about astrology and the like (I exaggerate, but only a little).
At the same time, we at Everworker are seeing that with the right approach AI can deliver amazing value — the hyped 10x productivity and/or cost-savings promise for sure, and in some cases significantly more. From fully automating mundane data entry tasks that border on diminishing human dignity; to customer support; to lead generation and inbound lead processing; to a multitude of operational use cases — that is where AI Workers can significantly help you today.
We are far from claiming a monopoly on agentic AI “truth”, but we humbly believe we have found a way that works, and that is what I will start sharing in this article. Oh, and yes: even though I reference the Everworker platform as an implementation example for obvious reasons, the approach itself is generic enough to apply to whatever framework or tooling you like to use.
This article assumes the reader is familiar with the main concepts of generative AI. If you are not, please review the previous articles of this series, especially “anatomy of the multi-agent” and “why ChatGPT is not AI”, and also feel free to enroll in the more formal and completely free AI Fundamentals for Business Professionals course.
If you are familiar with AI industry jargon, you may have heard quite a bit about Agentic AI and its promise. Take an LLM (large language model) such as GPT from OpenAI or Claude from Anthropic, write some instructions, and give it some “tools” — functions or small computer programs that you pre-build in your agentic environment, describe properly so that the LLM knows when to call which tool, and make executable at scale if you plan to deploy in any sort of enterprise production environment. For example, OpenAI in its latest API (and inside ChatGPT) provides very specific tools such as web search and image generation, as well as a very broad ability to generate and execute arbitrary Python code. And voila — expect magic!
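To make the “give it some tools” idea concrete, here is a minimal sketch of the dispatch loop at the heart of any tool-using agent. The tool names, the JSON message shape, and the functions themselves are illustrative assumptions, not any vendor's actual API; a real agent would let the LLM produce the tool-call message.

```python
import json

# Hypothetical tools; names and behavior are illustrative, not a real API.
def web_search(query: str) -> str:
    """Pretend to search the web and return a snippet."""
    return f"Top result for '{query}'"

def generate_image(prompt: str) -> str:
    """Pretend to generate an image and return its URL."""
    return f"https://example.com/images/{abs(hash(prompt))}.png"

# The tool schema the LLM sees: name plus a description,
# so it can decide which tool fits the current request.
TOOLS = {
    "web_search": {"fn": web_search, "description": "Search the web for fresh information."},
    "generate_image": {"fn": generate_image, "description": "Create an image from a text prompt."},
}

def run_tool_call(llm_output: str) -> str:
    """Dispatch a (simulated) LLM tool-call message to the matching function."""
    call = json.loads(llm_output)  # e.g. {"tool": "web_search", "args": {"query": "..."}}
    tool = TOOLS[call["tool"]]["fn"]
    return tool(**call["args"])

# In a real agent loop the LLM emits this JSON; here we hard-code its choice.
result = run_tool_call('{"tool": "web_search", "args": {"query": "agentic AI"}}')
print(result)  # → Top result for 'agentic AI'
```

The tool descriptions matter as much as the code: they are the only thing the model has to go on when choosing which tool to call.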
Well, it turns out it doesn’t work that way in the real world for any relatively complex case. And believe me, we tried. Even hands down the best and most well-thought-out coding agent to date — Claude Code — shows serious drawbacks as soon as you use it for anything more complex than toy projects (and we do). So what can you expect from a quickly put together “here are some tools, go do stuff” approach?!
So what is the right (or at least working) approach then?
We are not suggesting a revolution, just very careful crafting of the general and somewhat sloppy approach described above. In short, to build successful AI Agents or Workers, you need carefully defined and integrated Brains, Skills, and Knowledge. Only when all three categories are taken care of and integrated can you expect predictable value and performance from your AI Workers.
Think about AI Workers as you do about people. When you are hiring a human for a specific role, you need to make sure that:
It is absolutely the same with AI Workers! Let’s see how exactly.
Before you even start thinking about Agentic AI, you need to prepare your corporate knowledge. Across the several hundred enterprise customers we have talked to, the data and knowledge question consistently ranks among the top pains. The issue is simple: if your AI Workers cannot get fast, reliable, unfettered access to your corporate knowledge, they will be all but useless. LLMs are powerful, but they are trained on public data, not on your corporate information. They have no idea what your product is, what your marketing tone of voice sounds like, or what your go-to-market motions are — they know absolutely nothing about your company. Even if you are Microsoft, an LLM will know quite a bit of public information, but nothing internal.
Luckily, it is very easy to make AI Workers “understand” your knowledge using so-called RAG (retrieval-augmented generation — here is a quick overview if you need a refresher) without expensive LLM fine-tuning, which is very rarely required in practice. But to use RAG at scale you need to:
This is a gargantuan task for most companies — and this is where most of them fail. But without this step you will not succeed in your AI strategy, no matter what anybody tells you.
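The ingestion side of RAG — collecting documents, splitting them into chunks, embedding each chunk, and indexing the results — can be sketched in miniature. This is a toy illustration: a bag-of-words counter stands in for a real embedding model, and all names are assumptions rather than any product's API.

```python
import re
from collections import Counter

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words vector.
    Real pipelines call an embedding model here."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def build_index(docs: dict[str, str]) -> list[tuple[str, str, Counter]]:
    """Index every chunk of every document as (doc_id, chunk_text, vector)."""
    index = []
    for doc_id, text in docs.items():
        for piece in chunk(text):
            index.append((doc_id, piece, embed(piece)))
    return index

index = build_index({"pricing.md": "Our enterprise plan costs $99 per seat per month."})
print(len(index))  # → 1 (one short document, one chunk)
```

The “gargantuan” part in practice is not this loop but everything around it: connectors to dozens of systems, permissions, freshness, deduplication, and keeping the index in sync as documents change.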
Everworker provides an out-of-the-box Enterprise Knowledge Engine that does everything described above for you. No more building “data lakes” for millions of dollars and years of development: just select the systems you want to connect — start small if you like — and the rest happens automatically.
Skills are best illustrated by the following diagram. Essentially, they are AI Workflows — which work very well for repeatable, largely deterministic tasks that still require some flexibility.
Examples of skills can be:
So skills are best designed as small- or medium-sized workflows: repeatable across many cases, but customized for your company. Of course, you can in theory automate a business process of any complexity this way, but it takes extremely high effort to create and — what’s more important — is virtually impossible to maintain and change.
The point and challenge is: you must build a significant library of skills that are tailored to your company’s environment and ways of doing business.
This is another step where many AI initiatives fail. How do you build this library? Code it in Python? Too slow and too much work. Use systems such as n8n or UiPath? Possible, but still a lot of work. Rely fully on automatic code generation? It doesn’t produce reliable results.
But we have to have this library and make it available to our “real” AI Workers so that they become truly versatile and powerful. It is a non-negotiable requirement for success.
At Everworker, we combine workflow editing with AI-first, natural-language generation. Just tell our AI builder what you are trying to create, and it will guide you and help you build even complex skills much faster. But you keep full flexibility — if you prefer n8n or hand-coding instead, you can absolutely use those skills alongside our platform’s Universal Workers, which are “true” AI Agents.
In practice, as you are designing your AI Worker, think about which human skills this job requires, translate them into workflows (with the help of AI or not), develop them using tools of your choice, and start building your skills library this way.
The Brain is both the key part of any “real” AI Agent — or Universal Worker, as we call them — and our designation for Universal Workers themselves. This is where most of the Agentic AI hype is focused, and where lots of videos, articles, frameworks, etc., exist; at the same time, it is just the tip of the iceberg in your corporate AI strategy, and without the Knowledge and Skills steps nothing will work.
How does AI Universal Worker work? It itself has to have:
How do you create an actual AI Agent or Universal Worker? Again, you can code them — there are many good frameworks for it — or you can use a visual platform such as Everworker that makes the task much easier and requires no coding. Here are the typical steps to create a well-performing Universal Worker, illustrated with Everworker screenshots; the steps remain the same regardless of which framework you use, so they should be educational in any case:
1. Select knowledge sources that you want to make available for your Worker
If you code your agents, you will have to build so-called “RAG pipelines” yourself: decide how and when an agent should search for relevant context, retrieve it using vector search or graph queries, make it available as immediate context for the LLM request together with the user’s prompt, and then produce the result.
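The query-time half of such a pipeline — rank indexed chunks against the user's question, then inject the winners into the prompt — can be sketched as follows. This is a toy: a bag-of-words counter replaces a real embedding model, and the prompt layout is an illustrative assumption.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real pipelines call an embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank indexed chunks by similarity to the question; keep the top k."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Inject retrieved context ahead of the user's question for the LLM call."""
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}"

chunks = ["Enterprise plan costs $99 per seat.", "Our office dog is named Biscuit."]
prompt = build_prompt("How much does the enterprise plan cost?", chunks)
print(prompt)  # pricing chunk is selected; the irrelevant one is dropped
```

The “how and when” decisions mentioned above live around this core: whether to retrieve on every turn, how many chunks fit the context window, and whether to rewrite the question before searching.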
With Everworker, you simply select your knowledge sources and everything else happens under the hood:
NB! Make sure to write a good description for each of your knowledge sources — the LLM uses these descriptions to decide which knowledge source to consult for which request. This is a step many people tend to forget!
2. Select a “brain” and provide instructions for your Worker
If you are coding your Worker, you need to decide which LLM provider to use and adapt to its API using the respective SDK, or use one of the many frameworks to configure LLMs and system prompts. If you plan to use more advanced “brain” architectures — with smart context management and summarization, additional checks, context post-processing, etc. — you need to code all of that manually.
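Provider-agnostic “brain” configuration usually boils down to a small structure plus message assembly. Here is a minimal sketch; the field names and the provider/model strings are illustrative assumptions, though the system/user message-list shape matches what most chat-style LLM APIs expect.

```python
from dataclasses import dataclass

@dataclass
class BrainConfig:
    """Minimal 'brain' settings; fields are illustrative, not a specific SDK."""
    provider: str           # e.g. "openai" or "anthropic"
    model: str              # e.g. a model identifier for that provider
    system_prompt: str      # the Worker's instructions
    temperature: float = 0.2

def build_messages(cfg: BrainConfig, history: list[dict], user_input: str) -> list[dict]:
    """Assemble the message list most chat-style LLM APIs expect:
    system prompt first, then prior turns, then the new user message."""
    return (
        [{"role": "system", "content": cfg.system_prompt}]
        + history
        + [{"role": "user", "content": user_input}]
    )

cfg = BrainConfig("openai", "gpt-4o", "You are a support worker for Acme Corp.")
messages = build_messages(cfg, [], "How do I reset my password?")
print(messages[0]["role"])  # → system
```

The more advanced architectures mentioned above — summarizing old turns, post-processing context, adding guard checks — would all hook into `build_messages` before the final API call.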
With Everworker, you simply select one of the many LLMs pre-configured for your corporate environment, then pick one of the provided prompt templates, which you can of course customize and adjust further to your liking.
3. Select the Skills that your Worker will be able to use
This step is pretty straightforward if you have pre-built your skill library as discussed above and want to use only a limited number of skills for your Worker. If you want to make it more versatile and able to use more than 10–15 skills, you have to code a dynamic discovery mechanism yourself. This normally includes intent detection, categorization of available skills, semantic or categorical search, and so on.
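The core of such a dynamic discovery mechanism is ranking skill descriptions against the incoming request. Below is a deliberately crude sketch using lexical overlap; the skill registry and its descriptions are hypothetical, and a production system would use embeddings and intent detection instead of word matching.

```python
import re

# Hypothetical skill registry: skill name -> natural-language description.
SKILLS = {
    "create_invoice": "Generate and send an invoice to a customer.",
    "summarize_meeting": "Summarize a meeting transcript into action items.",
    "enrich_lead": "Look up company data for an inbound sales lead.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set for crude lexical matching."""
    return set(re.findall(r"[a-z]+", text.lower()))

def discover_skills(request: str, top_k: int = 2) -> list[str]:
    """Rank skills by word overlap between the request and each description.
    Production systems would use semantic (embedding) search plus intent detection."""
    q = tokens(request)
    ranked = sorted(SKILLS, key=lambda name: len(q & tokens(SKILLS[name])), reverse=True)
    return ranked[:top_k]

print(discover_skills("please send an invoice to our new customer")[0])  # → create_invoice
```

Only the discovered top-k skills are then exposed to the LLM as callable tools, which is what keeps a Worker with hundreds of skills from overwhelming the model's context.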
With Everworker, as you can guess :), all of this is taken care of automatically, and selecting skills is as easy as browsing through the lists of available Specialized Workers, pre-configured API connectors to various corporate systems, or any MCP Servers you may want to use in your Worker.
Just select the skills you want, click add — and you are all set! Your Universal Worker will be able to communicate with you and execute your tasks using chat, voice, or API-based interfaces if you want to incorporate it into existing systems!
Go and see for yourself now at Everworker website!
We will talk about the implementation details and internal architecture of well-performing AI agents in the next articles — there are lots of details that need to be taken care of to make them truly useful:
Stay tuned to our blog, or drop me a note at anton@everworker.ai — I am always happy to have an intelligent conversation!