Eliza Version
v0.1.9 (latest)
Issue Summary
When testing a bot that replies to its followers, I noticed that it sometimes generates multiple replies to a single tweet. The API logs show that the bot makes only two calls per response (one to the medium model and one to the large model). However, because of DEFAULT_MAX_TWEET_LENGTH, the generated response is split into multiple tweets to fit within the 280-character limit.
On its own, this splitting is a minor issue, but in some cases the agent detects the split responses as incomplete sentences and repeatedly attempts to generate additional replies. As a result, the API sometimes makes 4 to 6 calls instead of 2, leading to excessive LLM token usage and increased costs.
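To make the failure mode concrete, here is a minimal sketch of how a hard character-limit split can leave the last chunk mid-sentence. The helper names are hypothetical and only illustrate the behavior; this is not Eliza's actual splitting code.

```ts
// Hypothetical illustration only; not Eliza's actual splitter.
const DEFAULT_MAX_TWEET_LENGTH = 280;

// A hard split at the character limit ignores sentence boundaries.
function naiveSplit(
  text: string,
  limit: number = DEFAULT_MAX_TWEET_LENGTH
): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}

// A ~675-character reply becomes three tweets, and the middle and last
// chunks can start or end mid-word, e.g. "...and that is wh" / "y it works."
const reply = "A long generated answer... ".repeat(25);
const tweets = naiveSplit(reply);
console.log(tweets.length, tweets[tweets.length - 1]);
```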
Steps to Reproduce
1. Enable the bot to reply to follower tweets.
2. Observe the API logs, which show only two calls being made.
3. Because of DEFAULT_MAX_TWEET_LENGTH, the reply is split into multiple tweets.
4. In some cases, the agent perceives the split content as incomplete and generates additional responses (see the sketch after these steps).
5. The bot may end up making 4–6 API calls per tweet, leading to unnecessary token consumption.
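To illustrate how this can run away, here is a hypothetical sketch that assumes the agent's "is this reply finished?" check is punctuation-based; the real heuristic in the codebase may differ.

```ts
// Hypothetical sketch, assuming the completeness check looks for
// terminal punctuation. A mid-sentence final chunk is mistaken for an
// unfinished reply, turning 2 LLM calls into 4-6.
const looksIncomplete = (tweet: string): boolean =>
  !/[.!?…]["')\]]?$/.test(tweet.trim());

// Hard split at the character limit, as in the earlier sketch.
const hardSplit = (text: string, limit = 280): string[] =>
  text.match(new RegExp(`[\\s\\S]{1,${limit}}`, "g")) ?? [];

async function replyLoop(
  generate: () => Promise<string>,        // one LLM call per round
  post: (tweet: string) => Promise<void>,
  maxRounds = 3
): Promise<void> {
  for (let round = 0; round < maxRounds; round++) {
    const tweets = hardSplit(await generate());
    for (const t of tweets) await post(t);
    // If the last chunk ends mid-sentence, the loop calls the model
    // again instead of stopping.
    if (!looksIncomplete(tweets[tweets.length - 1] ?? "")) break;
  }
}
```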
Expected Behavior
The bot should generate only one response per tweet and split it appropriately when needed, without triggering unnecessary additional replies.
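For illustration, here is a minimal sketch of what an appropriate split could look like, assuming a sentence-aware splitter is an acceptable approach; this is not the project's actual implementation.

```ts
// Minimal sketch: split on sentence boundaries and greedily pack whole
// sentences up to the limit, so every tweet ends with punctuation.
// Oversized single sentences would still need a fallback hard split
// (omitted here for brevity).
function splitBySentence(text: string, limit = 280): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+["')\]]?\s*/g) ?? [text];
  const tweets: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    if ((current + sentence).length <= limit) {
      current += sentence;            // sentence still fits in this tweet
    } else {
      if (current) tweets.push(current.trim());
      current = sentence;             // start the next tweet
    }
  }
  if (current.trim()) tweets.push(current.trim());
  return tweets;
}
```

With a splitter like this, a punctuation-based completeness check would no longer flag split replies as unfinished.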
Actual Behavior
The bot sometimes misinterprets split responses as incomplete and keeps generating additional replies, leading to multiple API calls (4–6 instead of 2).
Impact
High token usage, leading to increased API costs.
Potential rate limiting or platform policy issues due to excessive replies to a single tweet.
Temporary Fixes
Set DEFAULT_MAX_TWEET_LENGTH to a much higher number (e.g., 500).
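For example, assuming your build reads the Twitter client's MAX_TWEET_LENGTH setting from the environment and falls back to DEFAULT_MAX_TWEET_LENGTH (280) when it is unset (worth verifying against your version of packages/client-twitter):

```
# .env: assumes MAX_TWEET_LENGTH is read by the Twitter client and
# overrides DEFAULT_MAX_TWEET_LENGTH; verify the setting name first.
MAX_TWEET_LENGTH=500
```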
I think this mainly comes from interactions.ts and continue.ts.