
Unexpected Behavior: interrupt(), when invoked for the second time, fails to wait for user input #792

Open
shtanko-michael opened this issue Jan 21, 2025 · 0 comments

Description
I am using the interrupt function inside a subgraph. The first time it works as expected: it pauses graph execution and I can type text in the console. But after I resume the graph and interrupt is called a second (or later) time, it immediately returns the same previously cached resume value instead of waiting for new input. So it does not work as expected.
When I remove the subgraph and implement the same logic directly in the parent graph, it works fine, so I think this is a bug. I also found some discussions here on GitHub explaining this behavior and a way to fix it, hope this helps!

langchain-ai/langgraph#3072
langchain-ai/langgraph@main...vigneshmj1997:langgraph:interrupt_patch_for_subgraph

As far as I understand, this was fixed in 0.2.63 of the Python LangGraph repo.
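My rough mental model of what is going on (a hypothetical illustration only, NOT the actual LangGraph internals): if resume values were looked up by a key that does not distinguish successive interrupt() calls within a subgraph task, every later call would resolve to the first cached value instead of pausing again. A self-contained sketch of that failure mode, with invented `interruptNaive`/`interruptCounted` helpers:

```typescript
// Hypothetical sketch -- not LangGraph code. null means "pause and wait".
function interruptNaive(resumes: Map<string, string>, taskId: string): string | null {
    // Buggy lookup: keyed by task only, so every interrupt() call in the
    // task resolves to the same stored resume value.
    return resumes.get(taskId) ?? null;
}

function interruptCounted(
    resumes: Map<string, string>,
    taskId: string,
    counters: Map<string, number>
): string | null {
    // Fixed lookup: the key includes a per-task call counter, so a second
    // interrupt() with no matching resume value pauses again.
    const n = counters.get(taskId) ?? 0;
    counters.set(taskId, n + 1);
    return resumes.get(`${taskId}:${n}`) ?? null;
}

const resumes = new Map([["task", "test answer"], ["task:0", "test answer"]]);
const counters = new Map<string, number>();

console.log(interruptNaive(resumes, "task"));             // "test answer"
console.log(interruptNaive(resumes, "task"));             // "test answer" again -> loops forever
console.log(interruptCounted(resumes, "task", counters)); // "test answer"
console.log(interruptCounted(resumes, "task", counters)); // null -> pauses as expected
```

The naive lookup matches the behavior in the output below: every pass through askHuman sees the same "test answer" until the recursion limit is hit.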

Code

import { StateGraph, Annotation, START, MessagesAnnotation, Command, interrupt, MemorySaver, END } from "@langchain/langgraph";
import dotenv from "dotenv";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import readline from 'readline';
import { v4 } from 'uuid';

let rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
});

dotenv.config();

const SugGraphStateAnnotation = Annotation.Root({
    ...MessagesAnnotation.spec,
    counter: Annotation<number>,
    caller: Annotation<string | null>
});

const askHuman = async (state: typeof SugGraphStateAnnotation.State) => {
    const humanMessage = interrupt({
        'id': v4(),
        'request': state.messages[state.messages.length - 1].content,
    }) as string;

    console.log('Execution=', state.counter, 'Human message=', humanMessage);

    return new Command({
        goto: state.caller ?? END,
        update: {
            messages: [new HumanMessage({ content: humanMessage, name: 'askHuman' })],
            caller: null
        },
    });
}

const initial = async (state: typeof SugGraphStateAnnotation.State) => {
    return new Command({
        goto: 'askHuman',
        update: {
            caller: 'initial',
            counter: state.counter + 1,
            messages: [new AIMessage({ content: 'Question to ask user', name: 'initial' })]
        }
    });
};

const subGraphWorkflow = new StateGraph(SugGraphStateAnnotation)
    .addNode("askHuman", askHuman)
    .addNode("initial", initial, { ends: ['askHuman', END] })
    .addEdge(START, "initial");

const subGraph = subGraphWorkflow.compile();


const ParentStateAnnotation = MessagesAnnotation;
const router = (state: typeof ParentStateAnnotation.State) => {
    return new Command({
        goto: 'subGraph',
        update: {
            caller: 'router',
            messages: [new AIMessage({ content: 'Go to subgraph', name: 'router' })]
        }
    });
}

const parentGraphWorkflow = new StateGraph(ParentStateAnnotation)
    .addNode("router", router, { ends: ['subGraph'] })
    .addNode("subGraph", subGraph)
    .addEdge(START, "router");

const checkpointer = new MemorySaver();

const parentGraph = parentGraphWorkflow.compile({ checkpointer });

const config = { configurable: { thread_id: "console_app" } };

async function main() {
    try {
        await parentGraph.invoke({
            messages: [
                new HumanMessage("Initial message")
            ]
        }, config);

        const state = await parentGraph.getState(config);
        console.log(JSON.stringify(state.values, null, 2));

        while (true) {
            const answer = await askQuestion();

            await resumeGraph(answer);
        }
    } catch (error) {
        console.error("An error occurred:", error);
    } finally {
        rl.close();
    }
}

async function askQuestion(): Promise<string> {
    return new Promise(function (resolve) {
        rl.question("\nUser answer: ", function (answer) {
            // Note: do not call rl.close() here, or later questions would hang.
            resolve(answer);
        });
    });
}

async function resumeGraph(answer: string) {
    await parentGraph.invoke(new Command({ resume: answer }), config)
}

main();

Output

User answer: test answer
Execution= 1 Human message= test answer
Execution= 2 Human message= test answer
Execution= 3 Human message= test answer
Execution= 4 Human message= test answer
Execution= 5 Human message= test answer
Execution= 6 Human message= test answer
Execution= 7 Human message= test answer
Execution= 8 Human message= test answer
Execution= 9 Human message= test answer
Execution= 10 Human message= test answer
Execution= 11 Human message= test answer
Execution= 12 Human message= test answer
Execution= 13 Human message= test answer
Execution= 14 Human message= test answer

An error occurred: GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the "recursionLimit" config key.

Troubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT/

    at CompiledStateGraph._runLoop (file:///C:/projects/talkapp-ai/langchain/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__/node_modules/@langchain/langgraph/dist/pregel/index.js:887:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async createAndRunLoop (file:///C:/projects/talkapp-ai/langchain/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected]__/node_modules/@langchain/langgraph/dist/pregel/index.js:980:17) {
  lc_error_code: 'GRAPH_RECURSION_LIMIT',
  pregelTaskId: 'd90e7254-46a7-582c-90d7-1fea82c89f52'
}
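For reference, the recursion limit mentioned in the error can be raised per invocation via the `recursionLimit` config key. In this case it would only postpone the crash, since the loop is driven by the cached resume value, but it can help while debugging:

```typescript
// Only delays GraphRecursionError here; the stale interrupt() value remains.
await parentGraph.invoke(new Command({ resume: answer }), {
    ...config,
    recursionLimit: 50,
});
```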