cachedGroupMetadata error #1166
And closing prekey on logs |
Are you doing literally this? |
Can you show an example? Please |
```js
const { default: makeWASocket, useMultiFileAuthState } = require("@whiskeysockets/baileys");
const pino = require("pino");

(async () => {
  const { state, saveCreds } = await useMultiFileAuthState("./auth");

  // Group metadata cache
  const groupMetadataCache = new Map();

  // Initialize the socket with cachedGroupMetadata in config
  const sock = makeWASocket({
    auth: state,
    logger: pino({ level: "silent" }),
    printQRInTerminal: true,
    cachedGroupMetadata: async (jid) => {
      // Check if metadata exists in the cache
      if (groupMetadataCache.has(jid)) {
        return groupMetadataCache.get(jid);
      }
      try {
        // Fetch metadata if not in cache
        const metadata = await sock.groupMetadata(jid);
        groupMetadataCache.set(jid, metadata); // Cache it
        return metadata;
      } catch (err) {
        console.error(`Failed to fetch metadata for group ${jid}:`, err);
        return null;
      }
    },
    getMessage: async (key) => {
      // Implement logic for fetching messages, if needed.
      return { conversation: "Message not found" };
    },
  });

  // Save credentials on update
  sock.ev.on("creds.update", saveCreds);

  // Example: listening to group messages and using cached metadata
  sock.ev.on("messages.upsert", async (m) => {
    const msg = m.messages[0];
    if (msg.key.remoteJid.endsWith("@g.us")) {
      const metadata = await sock.cachedGroupMetadata(msg.key.remoteJid);
      console.log("Group Metadata:", metadata);
    }
  });

  console.log("Socket initialized and ready to use.");
})();
```
|
I'm using the same logic but with my own store: I store group data on the upsert event, then load it via a function. |
That's the same thing I did too; I'm just showing a demo using an in-memory Map cache as an example. It's quick and easy to implement for beginners. |
One question, what is cachedGroupMetadata for? |
It's for reducing the frequent calls for group metadata to WhatsApp
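A minimal sketch of such a cache (illustrative only, not Baileys API: it assumes whatever object you return from cachedGroupMetadata is used in place of a network query, and that returning undefined falls back to a fetch; the TTL value is an arbitrary choice):

```javascript
// Hypothetical TTL cache for group metadata: entries expire so a stale
// participant list eventually falls through to a fresh network fetch.
class GroupMetadataCache {
  constructor(ttlMs = 5 * 60 * 1000) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // jid -> { value, expiresAt }
  }

  set(jid, value, now = Date.now()) {
    this.entries.set(jid, { value, expiresAt: now + this.ttlMs });
  }

  // Returns undefined when missing or expired, which tells the caller
  // (e.g. a cachedGroupMetadata callback) to fetch from the network.
  get(jid, now = Date.now()) {
    const entry = this.entries.get(jid);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.entries.delete(jid);
      return undefined;
    }
    return entry.value;
  }
}
```

Wired into the socket config this would look like `cachedGroupMetadata: async (jid) => cache.get(jid)`, with `cache.set(jid, metadata)` called from your groups.upsert / groups.update handlers.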
|
exactly, and avoiding the rate-limit |
It doesn't get avoided though, it's just a myth.
|
After I updated to the latest release, await sock.groupFetchAllParticipating(); stopped working. Has anyone been able to list groups after updating? |
It works, just wait for the socket to run for a while |
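One way to do that (a fragment for inside your connection setup; the connection.update event and groupFetchAllParticipating call appear elsewhere in this thread, while the 10-second grace period is an arbitrary guess):

```javascript
// Sketch: only call groupFetchAllParticipating once the socket reports
// 'open', and give it a short grace period first.
sock.ev.on('connection.update', async ({ connection }) => {
  if (connection === 'open') {
    await new Promise((resolve) => setTimeout(resolve, 10000)); // grace period (arbitrary)
    const groups = await sock.groupFetchAllParticipating();
    console.log(`Participating in ${Object.keys(groups).length} groups`);
  }
});
```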
Oh okay, thank you |
Final fix:

```js
import { saveGroupMetadata } from '#sql'; // apply at your own end

/**
 * Configuration for rate limiting and queue processing
 * @typedef {Object} Config
 * @property {number} INITIAL_DELAY - Initial delay between updates (10 minutes)
 * @property {number} PROCESS_DELAY - Delay between processing groups (5 seconds)
 * @property {number} RATE_LIMIT_DELAY - Delay after hitting rate limit (15 minutes)
 * @property {number} MAX_CONCURRENT - Maximum concurrent processes
 * @property {number} RETRY_DELAY - Delay between retries (3 minutes)
 * @property {number} MAX_RETRIES - Maximum retry attempts
 */
const CONFIG = {
  INITIAL_DELAY: 600000,
  PROCESS_DELAY: 5000,
  RATE_LIMIT_DELAY: 900000,
  MAX_CONCURRENT: 1,
  RETRY_DELAY: 180000,
  MAX_RETRIES: 3,
};

/**
 * Handles rate limiting and queue management for group metadata updates
 */
class RateLimitHandler {
  constructor() {
    this.queue = new Map();
    this.processing = false;
    this.retryCount = 0;
    this.lastProcessTime = 0;
  }

  /**
   * Delays execution for the specified milliseconds
   * @param {number} ms - Milliseconds to delay
   */
  async delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  /**
   * Processes a single group's metadata
   * @param {string} jid - Group JID
   * @param {Object} conn - Connection object
   */
  async processGroup(jid, conn) {
    try {
      const now = Date.now();
      const timeSinceLastProcess = now - this.lastProcessTime;
      if (timeSinceLastProcess < CONFIG.PROCESS_DELAY) {
        await this.delay(CONFIG.PROCESS_DELAY - timeSinceLastProcess);
      }
      console.log(`Processing group: ${jid}`);
      await saveGroupMetadata(jid, conn);
      this.lastProcessTime = Date.now();
      console.log(`Successfully processed group: ${jid}`);
      return true;
    } catch (error) {
      if (error?.data === 429) {
        console.log(`Rate limit hit for group: ${jid}`);
        throw error;
      }
      console.log(`Error processing group ${jid}: ${error.message}`);
      return false;
    }
  }

  /**
   * Processes the queue of groups
   * @param {Object} conn - Connection object
   */
  async processQueue(conn) {
    if (this.processing) {
      console.log('Queue is already being processed');
      return;
    }
    this.processing = true;
    console.log('Starting queue processing');
    try {
      for (const [jid, retries] of this.queue.entries()) {
        if (retries >= CONFIG.MAX_RETRIES) {
          console.log(`Max retries reached for group: ${jid}`);
          this.queue.delete(jid);
          continue;
        }
        try {
          await this.processGroup(jid, conn);
          this.queue.delete(jid);
        } catch (error) {
          if (error?.data === 429) {
            console.log(`Rate limit encountered, pausing for ${CONFIG.RATE_LIMIT_DELAY / 1000}s`);
            await this.delay(CONFIG.RATE_LIMIT_DELAY);
            this.queue.set(jid, retries + 1);
            break;
          }
        }
        await this.delay(CONFIG.PROCESS_DELAY);
      }
    } finally {
      this.processing = false;
      console.log('Queue processing completed');
    }
  }
}

/**
 * Updates group metadata with rate limiting
 * @param {Object} msg - Message object
 */
export const updateGroupMetadata = async msg => {
  const conn = msg.client; // your Baileys connection
  const handler = new RateLimitHandler();
  const updateGroups = async () => {
    try {
      console.log('Fetching participating groups');
      const groups = await conn.groupFetchAllParticipating();
      if (!groups) {
        console.log('No groups found');
        return;
      }
      const groupIds = Object.keys(groups);
      console.log(`Found ${groupIds.length} groups`);
      for (const jid of groupIds) {
        if (!handler.queue.has(jid)) {
          handler.queue.set(jid, 0);
        }
      }
      await handler.processQueue(conn);
    } catch (error) {
      console.log(`Error in updateGroups: ${error.message}`);
      if (error?.data === 429) {
        await handler.delay(CONFIG.RATE_LIMIT_DELAY);
      }
    }
  };
  await handler.delay(CONFIG.INITIAL_DELAY);
  await updateGroups();
  setInterval(updateGroups, CONFIG.INITIAL_DELAY);
};
```
|
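As a side note, the fixed RETRY_DELAY/RATE_LIMIT_DELAY values above could also be made adaptive. A pure, hypothetical helper sketching capped exponential backoff (not part of the code above):

```javascript
// Capped exponential backoff: attempt 0 waits baseMs, attempt 1 waits
// 2 * baseMs, and so on, never exceeding maxMs.
function backoffDelay(attempt, baseMs = 5000, maxMs = 900000) {
  const delay = baseMs * 2 ** attempt;
  return Math.min(delay, maxMs);
}
```

Feeding the queue's retry count into `backoffDelay` instead of a constant would let early retries happen quickly while repeated 429s back off toward the 15-minute cap.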
@AstroX11 Thank you, I'll check and give you feedback |
How was it? |
I've not checked; that would be tomorrow when I settle down. I'll update you. |
I'm facing a different problem entirely now. I couldn't get the connection to run for long enough to load groups. ERROR: Has anyone faced this? |
Check whether you are logged in or not
I'm facing a different problem entirely now. I couldn't get the connection to run for long enough to load groups:
```js
const { makeWASocket, DisconnectReason } = require("@whiskeysockets/baileys");
const useMongoDBAuthState = require("./mongoAuthState");
const path = require('path');
const fs = require('fs');
const settingsPath = path.join(__dirname, 'settings.json');
const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));
const mongoUri = settings.mongoUriInstances;
const mongoURL = mongoUri;
const { MongoClient } = require("mongodb");
const mongoClient = new MongoClient(mongoURL, { useUnifiedTopology: true });

let chatHistoryDB;
let mongoReady = false;
const instances = {};

async function connectMongoDB() {
  if (mongoReady) return;
  try {
    await mongoClient.connect();
    const dbNames = await mongoClient.db().admin().listDatabases();
    const dbExists = dbNames.databases.some(db => db.name === 'ChatHistory');
    if (!dbExists) {
      console.log("ChatHistory database doesn't exist, creating...");
      chatHistoryDB = mongoClient.db('ChatHistory');
    } else {
      chatHistoryDB = mongoClient.db('ChatHistory');
    }
    mongoReady = true;
    console.log("Connected to MongoDB for chat history and instance data.");
  } catch (error) {
    console.error("Error connecting to MongoDB:", error);
    mongoReady = false;
  }
}

async function connectionLogic(instanceID) {
  if (!instanceID) {
    console.error('Error: instanceID is undefined or invalid. Cannot initiate instance.');
    return;
  }
  try {
    await connectMongoDB(); // Ensure MongoDB is connected before proceeding
    if (!mongoReady) {
      console.error('MongoDB is not ready. Cannot initiate connection.');
      return;
    }
    const contactSaverCollection = mongoClient.db("ContactSaver").collection(`${instanceID}`);
    const { state, saveCreds } = await useMongoDBAuthState(contactSaverCollection, instanceID);
    const sock = makeWASocket({
      printQRInTerminal: true,
      mobile: false,
      auth: state,
      browser: [instanceID, 'Chrome', '131.0.6778.205'],
      maxMsgRetryCount: 3,
      qrTimeout: 30000,
      generateHighQualityLinkPreview: true,
    });
    instances[instanceID] = {
      sock: sock,
      qrAttempts: 0,
      reconnectionAttempts: 0,
      connectionClosed: false,
      mutex: Promise.resolve(),
      isProcessing: false,
    };
    const withLock = async (fn) => {
      instances[instanceID].mutex = instances[instanceID].mutex.then(() => {
        return fn();
      });
      try {
        await instances[instanceID].mutex;
      } catch (e) {
        console.error("Error executing with lock: ", e);
        throw e;
      }
    };
    const retrySendMessage = async (instanceID, to, text, read, retryCount = 0) => {
      const maxRetries = 3;
      try {
        await sendTextMessage(instanceID, to, text, read);
        console.log(`Successfully sent message (after ${retryCount} retries)`);
      } catch (error) {
        if (retryCount < maxRetries) {
          console.log(`Failed to send message, retrying in 3 seconds (attempt ${retryCount + 1})`, error);
          await new Promise(resolve => setTimeout(resolve, 3000));
          await retrySendMessage(instanceID, to, text, read, retryCount + 1); // Recursive retry
        } else {
          console.error(`Failed to send message after ${maxRetries} retries`, error);
        }
      }
    };
    sock.ev.on('connection.update', async (update) => {
      const { connection, lastDisconnect, qr, isNewLogin, receivedPendingNotifications } = update;
      instances[instanceID].connection = connection;
      instances[instanceID].lastDisconnect = lastDisconnect;
      if (connection === 'close') {
        const { handleWebhook } = require('./webhook');
        handleWebhook('connection.update', instanceID, { connection, lastDisconnect, qr, isNewLogin, receivedPendingNotifications });
        const shouldReconnect = lastDisconnect?.error && lastDisconnect.error.isBoom && lastDisconnect.error.output.statusCode !== DisconnectReason.loggedOut;
        if (instances[instanceID].connectionClosed) {
          console.log(`Instance ${instanceID} is permanently closed and will not reconnect.`);
          try {
            await sock.ws.close();
            await sock.ws.terminate();
          } catch (wsError) {
            console.error(`Error closing WebSocket for instance ${instanceID}:`, wsError);
          }
          return;
        }
        if (instances[instanceID].qrAttempts >= 5 && instances[instanceID].reconnectionAttempts >= 5) {
          instances[instanceID].connectionClosed = true;
          console.log(`Max QR and reconnection attempts reached for instance ${instanceID}. Connection will not be re-established.`);
          try {
            await sock.ws.close();
            await sock.ws.terminate();
          } catch (wsError) {
            console.error(`Error closing WebSocket for instance ${instanceID}:`, wsError);
          }
          return;
        }
        if (shouldReconnect) {
          instances[instanceID].reconnectionAttempts++;
          console.log(`Reconnecting instance ${instanceID}... attempt number ${instances[instanceID].reconnectionAttempts}`);
          setTimeout(() => connectionLogic(instanceID), 3000);
        } else if (lastDisconnect?.error?.output?.statusCode === DisconnectReason.loggedOut) {
          console.log(`Instance ${instanceID} logged out and will not reconnect.`);
          instances[instanceID].connectionClosed = true;
          try {
            await sock.ws.close();
            await sock.ws.terminate();
          } catch (wsError) {
            console.error(`Error closing WebSocket for instance ${instanceID}:`, wsError);
          }
        }
      } else if (connection === 'open') {
        console.log(`Instance ${instanceID} connected`);
        instances[instanceID].qrAttempts = 0;
        instances[instanceID].reconnectionAttempts = 0;
      }
      if (qr) {
        if (instances[instanceID].qrAttempts < 5) {
          instances[instanceID].qr = qr;
          instances[instanceID].qrAttempts++;
          console.log(`QR code generated for ${instanceID}, attempt ${instances[instanceID].qrAttempts}`);
        } else {
          console.log(`Max QR attempts reached for ${instanceID}. No more QR codes will be generated.`);
        }
      }
      instances[instanceID].isNewLogin = isNewLogin;
      instances[instanceID].receivedPendingNotifications = receivedPendingNotifications;
    });
    sock.ev.on('messages.upsert', async ({ messages }) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          for (const message of messages) {
            if (message.key.remoteJid === 'status@broadcast') {
              try {
                handleWebhook('newMessage', instanceID, { messages: [message] });
                // Optionally save status messages to chat history if needed
                // const { key, messageTimestamp, pushName, ...statusMessage } = message;
                // await instances[instanceID].chatHistoryCollection.insertOne({ ...statusMessage });
              } catch (e) {
                console.error("Error processing status messages: ", e.message, e.stack);
              }
            } else {
              handleWebhook('newMessage', instanceID, { messages: [message] });
              // Save all messages into ChatHistory
              // const { key, messageTimestamp, pushName, ...chatMessage } = message;
              // await instances[instanceID].chatHistoryCollection.insertOne({ key, messageTimestamp, pushName, ...chatMessage });
            }
          }
        } catch (err) {
          console.error("Error upserting message: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('chats.upsert', async ({ Chat }) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('newChat', instanceID, { Chat: Chat });
        } catch (err) {
          console.error("Error on chats.upsert: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('group-participants.update', async (groupusers) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('groupUsers', instanceID, groupusers);
        } catch (err) {
          console.error("Error on group-participants.update: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('groups.upsert', async (groups) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('groupupsert', instanceID, groups);
        } catch (err) {
          console.error("Error on groups.upsert: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('groups.update', async (groups) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('groupupdate', instanceID, groups);
        } catch (err) {
          console.error("Error on groups.update: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('contacts.upsert', async (event) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          const contacts = event.contacts || event;
          if (contacts && contacts.length > 0) {
            handleWebhook('contacts', instanceID, contacts);
          }
        } catch (err) {
          console.error("Error on contacts.upsert: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('contacts.update', async (contactsupdate) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          if (contactsupdate && contactsupdate.length > 0) {
            handleWebhook('contactupdate', instanceID, contactsupdate);
          }
        } catch (err) {
          console.error("Error on contacts.update: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('call', async (callEvent) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('call', instanceID, callEvent);
        } catch (err) {
          console.error("Error on call: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('creds.update', async (mycreds) => {
      const { handleWebhook } = require('./webhook');
      withLock(async () => {
        try {
          handleWebhook('mycreds', instanceID, mycreds);
        } catch (err) {
          console.error("Error on creds.update: ", err.message, err.stack);
        }
      });
    });
    sock.ev.on('creds.update', saveCreds);
    await mongoClient.db("ContactSaver").collection("instances").updateOne(
      { instanceID },
      { $set: { instanceID } },
      { upsert: true }
    );
    return sock;
  } catch (error) {
    console.error('Error during connectionLogic', error);
  }
}

async function deleteInstance(instanceID) {
  try {
    if (!instances[instanceID]) {
      console.log(`Instance ${instanceID} not found in memory.`);
      return false;
    }
    const sock = instances[instanceID].sock;
    if (!sock || !sock.ev || !sock.ws) {
      return false;
    }
    sock.ev.removeAllListeners();
    try {
      await sock.ws.close();
      await sock.ws.terminate();
    } catch (wsError) {
      console.error(`Error closing WebSocket for instance ${instanceID}:`, wsError);
    }
    try {
      const contactSaverCollection = mongoClient.db("ContactSaver").collection(`${instanceID}`);
      await contactSaverCollection.drop();
      // const chatHistoryCollection = chatHistoryDB.collection(`chat_history_${instanceID}`);
      // await chatHistoryCollection.drop();
    } catch (dbError) {
      console.error(`Error dropping MongoDB collection for instance ${instanceID}:`, dbError);
    }
    delete instances[instanceID];
    try {
      await mongoClient.db("ContactSaver").collection("instances").deleteOne({ instanceID });
    } catch (deleteError) {
      console.error(`Error deleting instance record from MongoDB for instance ${instanceID}:`, deleteError);
    }
    console.log(`Instance ${instanceID} deleted from MongoDB and removed from memory.`);
    return true;
  } catch (error) {
    console.error(`Error during instance deletion for ${instanceID}:`, error);
    throw new Error(`Unable to delete instance ${instanceID}`);
  }
}

// Export connectMongoDB and other necessary functions
module.exports = {
  connectionLogic,
  instances,
  mongoClient,
  deleteInstance,
  connectMongoDB,
  // getDb,
  // isMongoReady
};
```
------------------------------
ERROR:

```
Uncaught exception: Error: Connection Closed
    at sendRawMessage (***@***.***\baileys\lib\Socket\socket.js:57:19)
    at sendNode (***@***.***\baileys\lib\Socket\socket.js:76:16)
    at ***@***.***\baileys\lib\Socket\messages-send.js:472:19
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.transaction (***@***.***\baileys\lib\Utils\auth-utils.js:135:26)
    at async relayMessage (***@***.***\baileys\lib\Socket\messages-send.js:306:9)
    at async sendPeerDataOperationMessage (***@***.***\baileys\lib\Socket\messages-send.js:237:23)
    at async sendRetryRequest (***@***.***\baileys\lib\Socket\messages-recv.js:97:27)
    at async ***@***.***\baileys\lib\Socket\messages-recv.js:638:33
    at async ***@***.***\baileys\lib\Utils\make-mutex.js:19:36 {
  data: null,
  isBoom: true,
  isServer: false,
  output: {
    statusCode: 428,
    payload: {
      statusCode: 428,
      error: 'Precondition Required',
      message: 'Connection Closed'
    },
    headers: {}
  }
}
```
MongoDB connection closed.
[nodemon] app crashed - waiting for file changes before starting...
------------------------------
Has anyone faced this?
|
I'm logged in, but it keeps logging out and back in continuously, so it never stabilizes. |
Describe the bug
I updated cachedGroupMetadata in the socket, but it made the situation worse; the bot isn't even responding now, just stuck in a loop of "cachedGroupMetadata function called with chat:" and "getMessage function called with chat:".
My socket config:
```js
const connectionOptions = {
  version: version, // [2, 3000, 1015901307]
  logger: Pino({
    level: 'silent',
  }),
  printQRInTerminal: true,
  browser: Browsers.ubuntu("Chrome") || ["Ubuntu", "Edge", "110.0.1587.56"] || Browsers.macOS("Safari"),
  auth: {
    creds: state.creds,
    keys: makeCacheableSignalKeyStore(state.keys, pino({ level: "fatal" }).child({ level: "fatal" })),
  },
  generateHighQualityLinkPreview: true,
  getMessage: async (key) => {
    // console.log("getMessage function called with key:", key); // Log function call
    console.log("getMessage function called with key:"); // Log function call
  },
  cachedGroupMetadata: async (chat) => {
    console.log("cachedGroupMetadata function called with chat:", chat); // Log function call
  },
  // emitOwnEvents: true, // Don't know the effects
  patchMessageBeforeSending: message => {
    const requiresPatch = !!(
      message.buttonsMessage ||
      message.templateMessage ||
      message.listMessage
    );
    if (requiresPatch) {
      message = {
        viewOnceMessage: {
          message: {
            messageContextInfo: {
              deviceListMetadataVersion: 2,
              deviceListMetadata: {},
            },
            ...message,
          },
        },
      };
    }
  },
  msgRetryCounterCache,
  defaultQueryTimeoutMs: undefined,
};
```
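Worth noting: both callbacks above only log and never return a value, so every call produces undefined. A hedged sketch of stubs that return something useful (the in-memory stores are illustrative assumptions, not from the original config):

```javascript
// Illustrative in-memory stores (assumptions, not part of the report above).
const groupCache = new Map();   // jid -> group metadata
const messageStore = new Map(); // message id -> message content

const connectionOptions = {
  // ...same options as above...
  getMessage: async (key) => {
    // Return the stored message (or undefined) instead of nothing at all.
    return messageStore.get(key.id);
  },
  cachedGroupMetadata: async (jid) => {
    // Return cached metadata; undefined lets Baileys query the server.
    return groupCache.get(jid);
  },
};
```

The stores would be filled from your messages.upsert and group event handlers, as shown earlier in this thread.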
I'm using
Baileys version 6.7.9
OS: Ubuntu