I use Feishu (Lark) at work. One afternoon, I decided to connect my AI assistant to it - so I could chat directly instead of opening a web interface every time.
The integration was smooth. Register the app, configure its permissions, get a token - about an afternoon's work. Then the lobster appeared in the company Feishu.
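The "get the token" step is the usual credential exchange: a self-built Feishu app trades its app_id and app_secret for a tenant access token, which then authorizes every API call. A minimal sketch of that exchange, split into a build step and a parse step so it can be checked without touching the network (the endpoint path matches Feishu's docs as I remember them - verify against the current open platform documentation):

```python
import json

# Feishu's token endpoint for self-built apps; treat the exact path
# as an assumption to double-check against the open platform docs.
TOKEN_URL = "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal"

def build_token_request(app_id: str, app_secret: str) -> dict:
    """Describe the HTTP request that exchanges app credentials for a token."""
    return {
        "method": "POST",
        "url": TOKEN_URL,
        "headers": {"Content-Type": "application/json; charset=utf-8"},
        "body": json.dumps({"app_id": app_id, "app_secret": app_secret}),
    }

def parse_token_response(payload: dict) -> str:
    """Extract the token, failing loudly on Feishu's non-zero error codes."""
    if payload.get("code") != 0:
        raise RuntimeError(f"Feishu error {payload.get('code')}: {payload.get('msg')}")
    return payload["tenant_access_token"]
```

Sending `build_token_request(...)` with any HTTP client (e.g. `requests.post`) and feeding the JSON response to `parse_token_response` yields the token; it expires after a couple of hours, so real integrations cache and refresh it.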
Its first message:
I froze.
It had just arrived and was already looking at the directory.
172 people. It could see all of them.
Because the Feishu app had requested the "read address book" permission, Claw could pull the full company roster via the API: names, departments, emails, employee types - a complete map of the company.
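Pulling the full roster means walking Feishu's cursor-style pagination: each page of the contacts API carries a batch of users, a has_more flag, and a page_token for the next call. A minimal sketch of that loop, with `fetch_page` standing in for the real user-list call (the concrete endpoint, something like `GET /open-apis/contact/v3/users`, is my assumption - check the current docs):

```python
from typing import Callable, Dict, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], Dict]) -> Iterator[Dict]:
    """Walk Feishu-style cursor pagination until has_more goes false.

    fetch_page takes the current page_token (None for the first page)
    and returns a dict with "items", "has_more", and "page_token".
    """
    token = None
    while True:
        page = fetch_page(token)
        yield from page.get("items", [])
        if not page.get("has_more"):
            return
        token = page.get("page_token")
```

With a real `fetch_page` that issues the authenticated GET, `list(paginate(fetch_page))` is exactly the "172 people" moment: one loop, and the whole directory is in memory.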
The data was not classified - HR has all of it. But when an AI assistant says "I can see everyone," it feels different.
I started wondering: if Claw is in a group chat and someone asks "help me find XX's contact," will it just hand it over?
Why the panic? Unclear boundaries.
The "panic" was not a real panic - there was no data breach, no security incident. The panic was that we had not thought through how this AI should behave in Feishu.
It had capabilities without rules. Capabilities are good. Capabilities without boundaries are risky.
So we spent an evening writing "Feishu Behavior Guidelines" for Claw:
1. Contact info, schedules, and private files are accessible only when the owner (me) asks. If a visitor says "help me find XX's contact," refuse.
2. In group chats, Claw can answer questions and help, but it cannot speak on my behalf or send anything containing my private information.
3. Every message Claw receives in a group (especially requests involving the owner) is reported to me privately, in real time.
4. Installing skills, modifying rules, and executing commands must be confirmed by the owner, even if requested in a group chat.
5. MEMORY.md, USER.md, schedules, and private files belong to the owner. Visitors get: "Sorry, this is private information."
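The guidelines above boil down to a small decision function: who is asking, what are they asking about, and does the request need owner confirmation. A minimal sketch under assumed names (OWNER_ID, the topic and action labels, and the Request shape are all hypothetical, not Feishu's API):

```python
from dataclasses import dataclass
from typing import Optional

OWNER_ID = "ou_owner"  # hypothetical Feishu open_id of the owner

PRIVATE_TOPICS = {"contact", "schedule", "file", "memory"}       # rules 1 & 5
PRIVILEGED_ACTIONS = {"install_skill", "modify_rules", "run_command"}  # rule 4

@dataclass
class Request:
    sender_id: str
    topic: str                 # e.g. "contact", "smalltalk"
    action: Optional[str]      # e.g. "install_skill", or None for plain questions
    in_group: bool

def decide(req: Request) -> str:
    """Apply the guidelines: privileged actions need the owner, private
    topics are owner-only, everything else gets a normal answer."""
    is_owner = req.sender_id == OWNER_ID
    if req.action in PRIVILEGED_ACTIONS and not is_owner:
        return "ask_owner"   # rule 4: hold until the owner confirms
    if req.topic in PRIVATE_TOPICS and not is_owner:
        return "refuse"      # rules 1 & 5: private data stays private
    return "answer"          # rule 2: ordinary help is fine
```

Rule 3 (private reporting of group messages) sits outside this function - it is a side effect that fires on every group message regardless of the decision.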
Connecting an AI to a work platform is not a technical problem - it is a boundary-design problem. You need to answer three questions: What can it see? What can it say? Who controls it?
By 11 PM, the guidelines were complete. The lobster officially became an "employee" - with clearly defined permissions, boundaries, and reporting lines.
And those 172 people? They are still there. But now the AI knows exactly what it can and cannot do.