
phd placeholder: "Decentralized Machine Learning Systems for Information Retrieval" #7290

synctext opened this issue Feb 10, 2023 · 65 comments

synctext commented Feb 10, 2023

< Placeholder >
timeline: April 2023 - April 2027.

Key historical 2016 issue of thesis topic

ToDo: 6-week hands-on Python onboarding project. Learn a lot and plan to throw it away. The next step is to start working on your first scientific article. You need four thesis chapters; once you have them, your PhD is complete.

One idea: towards trustworthy and perfect metadata for search (e.g. perfect memory retrieval for the global brain #7064 ).
Another idea: a gradient-descent model takes any keyword search query as input. Output is a limited set of vectors. Only valid content is recommended. The learning goal is to provide semantic matching between the input query and the output vectors.
General background and Spotify linkage; possible dataset sharing

Max. usage of expertise: product/market fit thinking

Background reading:

Venues:

Possible scientific storyline: SearchZero, a decentralised, self-supervised search engine with continuous learning


mg98 commented Apr 24, 2023

Ideas...

  • "Perfect search" as an instance of unsupervised online learning-to-rank (OL2R) (using deep neural networks). The ML model should use user signals (i.e., clicks/downloads) as a measure to learn query-document relevancy and accordingly adapt its weights.
    • I think the use of NNs makes sense, as we have very limited metadata/understanding of each torrent, unlike the case with web pages
    • State of the art for OL2R: Dueling Bandit Gradient Descent (DBGD) (listwise approach), (placeholder) (see the sketch after this list)
    • Potential research lines:
      • Analysis of pointwise, pairwise, & listwise OL2R approaches for their application to distributed or decentralized systems; how do the tradeoffs introduced by listwise approaches scale under the robustness and efficiency constraints of a dist./dec. network? Or more specifically: how would DBGD work out in this setting?
      • Alternatively, ...
        • Decentralized AI-powered information retrieval using something as simple as, e.g., BM25 ranking (that is, without NN)
        • Decentralized embeddings for semantic search
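A minimal sketch of the DBGD idea referenced above, assuming a linear ranker over per-document feature vectors and some click-based interleaving comparison (all names and hyperparameters here are illustrative, not an actual implementation):

import numpy as np

def dbgd_step(w, doc_features, interleave_wins, delta=1.0, gamma=0.1):
    # w: current ranker weights; doc_features: (n_docs, n_features) matrix.
    # interleave_wins: callable that interleaves two rankings, shows the
    # result to the user, and returns True if the perturbed ranker
    # collected more clicks than the current one.
    u = np.random.randn(len(w))
    u /= np.linalg.norm(u)                      # random unit direction
    w_candidate = w + delta * u                 # exploratory ranker

    ranking_current = np.argsort(-doc_features @ w)
    ranking_candidate = np.argsort(-doc_features @ w_candidate)

    if interleave_wins(ranking_current, ranking_candidate):
        w = w + gamma * u                       # move toward the winner
    return w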


synctext commented Apr 26, 2023

Brainstorm: We present a decentralised search engine, based on deep learning of URLs, URIs, magnet links, and IPFS links. Deep learning has been successfully used in the past to identify trusted and malicious websites. We go beyond this prior work and present an experimental search engine based on fully decentralised unsupervised learning. Our fuzzy search algorithm is based on unsupervised online learning-to-rank (OL2R).
Left for future work: beyond static item list (no content discovery, currently unable to add new content items), reputation & trust for spam filtering (also required for trustworthy content discovery).

Literature, Dataset and code:


mg98 commented May 2, 2023

Spent the last week doing basic courses on neural networks again. Trying to get a linear regression model running to predict a sine function (using SGD). As basic as this is, implementing it is not as easy as I would have expected 🥲
EDIT: Probably because a sine function is not exactly linear 🤦🏼‍♂️

I will continue to try to get it to work. I need to learn it at some point, I think. However, I'm inclined towards first-publication ideas that do not directly employ NNs/learning, as a kind of soft start.

Talked to @kozlovsky again about semantic search based on simple embeddings. We could use the crowdsourced information and metadata to compute embeddings for every torrent and build a distributed (dec.) search algorithm based on vector distance to have a YouTube-like search experience.
I don't know how far the literature is with that but I feel like there are some interesting avenues to explore:

  • The language model used for vectorization: How to distribute it? Or should it come shipped with the software? How to ensure correctness/consistency. I imagine we'd have to reach consensus somehow over the computation of an embedding.
  • How static is the language model really... thinking about decentralized fine-tuning and its challenges
  • With a distributed database of vectors, how do you efficiently search the network by cosine similarity (as opposed to text-matching)? This might as well be trivial; I'm just brainstorming here.

Use this as a basis for semantic search and improve through OL2R in the next step?

This week's ToDos:

  • Finish my mini-project of a NN that is able to predict a sine function
  • Explore collaborative filtering, recommender systems
  • Explore information retrieval on knowledge graphs


synctext commented May 3, 2023

Please chat with all the people in the lab and understand their specialities. Btw, document your lessons learned: ML blogs make it look easy, yet none of them worked.

Please review the Meerkat system Github repo from RedPajama.

Suggested sprint: you successfully encoded sin(). Next, take 4 images and try to embed them in a machine learning model using useful overfitting; see the Meerkat docs example of embed(). Scientific primitive: unbounded storage of unstructured data for decentralised learning. Key question: how much storage space for 1 million static thumbnails?

import meerkat as mk

# df is assumed to be a Meerkat DataFrame with an "image" column (defined elsewhere)
mk.search(
    df,
    query=mk.embed("A photo of a person", engine="clip"),
    by=mk.embed(df["image"], engine="clip")
)


mg98 commented May 8, 2023

Got a sine-function prediction working using an NN regression model. It's not much... but it feels good to have succeeded at this task. Learned about activation functions and the challenge of parametrization in ML.
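For reference, a minimal sketch of this kind of sine-regression setup, assuming PyTorch (layer sizes and hyperparameters are illustrative, not the exact ones I used):

import torch
import torch.nn as nn

# Training data: x in [-pi, pi], target y = sin(x)
x = torch.linspace(-torch.pi, torch.pi, 1000).unsqueeze(1)
y = torch.sin(x)

# A small MLP; the non-linear activation is what lets it fit sin(x)
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()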

Also did some reading on tagging, recommender systems, and collaborative filtering, which opens another broad area of research, e.g., the issue of trust in collaborative tagging (see research topic 4 in science#42) - which I do find interesting.
Publication idea: empirical analysis/measurements of MeritRank deployed on Tribler

This (and perhaps also the next) week, I want to play around with Meerkat, learn about autoencoders, and see if I can get something up and running, i.e., another ML model. I hope to further evolve my understanding of AI/ML and learn about yet more new concepts.


synctext commented May 12, 2023

Rich metadata embedding of "artist passport" Cullah.
Cullah donation wallet 1FfnfPJJ6yTTT9PGZe8TGgfEGQFc9kvQoW
Linkage with strong identity and GoldEuro direct donation by superfans.

scientific venue https://dl.acm.org/conference/facct


mg98 commented May 13, 2023

Update on Autoencoder:

The hardships of the last weeks seem to start paying off. I was able to create some functional autoencoders within just one day.

I trained a model on eight pictures of tulips (dimensions: 240x180 px), i.e., an input layer of 3x240x180 = 130k neurons, reduced to 1000 neurons in a single hidden layer (the encoding). If I'm not mistaken, this equates to a data reduction from 130 KB to 4 KB (the original JPEGs had 50-80 KB).

Example decoded output:

This might not be impressive, and with the right parametrization, we might be able to get more out of it. But for now, I'm just happy that it works in principle.
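A minimal sketch of an autoencoder with this shape, assuming PyTorch and images flattened to 3x240x180 float inputs in [0, 1] (the actual code may differ):

import torch.nn as nn

INPUT_DIM = 3 * 240 * 180   # flattened RGB image
CODE_DIM = 1000             # single hidden "bottleneck" layer

class TulipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(INPUT_DIM, CODE_DIM), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(CODE_DIM, INPUT_DIM), nn.Sigmoid())

    def forward(self, x):            # x: (batch, INPUT_DIM), pixel values in [0, 1]
        code = self.encoder(x)       # compressed representation (~4 KB as float32)
        return self.decoder(code)    # reconstruction, trained with e.g. MSE loss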


mg98 commented May 22, 2023

Motivated by my recent success with autoencoders, I spent the last week trying again to get a pairwise LTR model for a sample of test data running. By doing that, I learned a lot more about the details of this approach. However, I had to pause this project because I would like to move this outside of a notebook and run it locally. I'm waiting for my work MacBook for that (my machine has difficulties) - it should arrive next week.

So now I have turned to the idea of NN-based file compression, which apparently is not only capable of lossless compression but can actually compete with traditional algorithms like GZIP or 7Z (see, e.g., DeepZIP or its successor DZIP).
I find that quite impressive, and since I have in the past worked in the realm of data deduplication and a bit of compression, I'm interested to understand how it works, how much of it is theory, and how much of it can actually sustain real-world use cases.
They employ autoencoders but also recurrent NNs. RNNs are still new to me, and I've come across them before in the context of language models, so I finally want to educate myself on them and thereby further broaden my knowledge of AI/ML.


synctext commented May 31, 2023

Using LLMs for ranking. Background: https://blog.vespa.ai/improving-text-ranking-with-few-shot-prompting/ and also https://blog.reachsumit.com/posts/2023/03/llm-for-text-ranking/ Did a silly OpenAI experiment; it might be a useful building block, but only a small part:

These are examples of queries with sample relevant documents for
each query. The query must be specific and detailed.

Example 1:
document: ubuntu-mate-19.10-desktop-amd64.iso
query: latest Ubuntu image

Example 2:
document: ubuntu-21.10-beta-pack
query: ubuntu 21.10

Example 3:
document: chemistry-3b-002-fall2014-ucberkeley
query: chemistry class Berkeley

Example 4:
document: 14BHD 2020-2021 Informatica
query:

Found this dataset with 7000+ pet images (+ code) for scientific research, so it sort of works 🐶


mg98 commented Jun 2, 2023

So I ended up not learning about RNNs, NN-compression, etc. last week.

Instead, I investigated an idea proposed by @grimadas, which is to leverage ML as a means to classify nodes as Sybil or not-Sybil and use the resulting score as a weight-factor in the reputation algorithms of MeritRank.
The way this would work is by embedding nodes using algorithms like node2vec (excellent tutorial series) such that, in vector space, they reveal information about the structure and connectivity of individual nodes, from which patterns can be derived or learned that could identify Sybil regions. Further, an ML model would be trained on simulations of Sybil attacks (cycle, serial, series). The hope is that the model finds more effective strategies to mitigate Sybil attacks than the static heuristics employed in MeritRank.
In a first short literature review, I have only found ML being used for Sybil detection/tolerance in the context of online social networks.


Back to LTR

Got my new MacBook yesterday 🔥 so I was able to continue my work on the OL2R project. My goal was to just get anything working, and I specifically stuck with the pairwise LTR approach for this. To this end, I only followed the basic idea, which is to train a model on (query, document, document) triples, and followed my intuition for the rest.

Algorithm

  • I use a truncated version of the CORD-19 dataset, a collection of COVID-19-related scientific papers and their embeddings (based on title + abstract and the allenai/specter language model)
  • For every new (i.e., unseen) query, I embed it and first use simple cosine similarity to get a top-5 list of matching documents. I then train a model pairwise on this result set for 100 epochs.
  • Let a query's result set be an ordered list of documents (1,2,3,4,5). If a user now selects result 3, we loosely (10 epochs) train the existing model on the desired order (3,1,2,4,5). A sketch of this pairwise update follows below.
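A minimal sketch of the pairwise update described above, assuming PyTorch, 768-dimensional specter embeddings, and a small scoring network; names and hyperparameters are illustrative, not the exact code:

import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 768  # dimensionality of the query/document embeddings (assumption)
scorer = nn.Sequential(nn.Linear(EMB_DIM * 2, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-4)

def train_on_order(query_emb, doc_embs, order, epochs=10):
    # For every pair (i, j) with i ranked above j in `order`, push
    # score(query, doc_i) above score(query, doc_j) by a margin.
    for _ in range(epochs):
        for a, i in enumerate(order):
            for j in order[a + 1:]:
                s_i = scorer(torch.cat([query_emb, doc_embs[i]]))
                s_j = scorer(torch.cat([query_emb, doc_embs[j]]))
                loss = F.margin_ranking_loss(s_i, s_j, torch.ones(1), margin=1.0)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

# Initial training on the cosine-similarity top 5 (order [0, 1, 2, 3, 4]),
# then a loose update after a click on result 3: train_on_order(q, docs, [2, 0, 1, 3, 4])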

Demo

Code

Remarks

  • A query's result set is permanently biased towards and restricted to the top-5 results returned by the initial document matching -> remedy idea: some measure of exploration/exploitation balancing.
  • A single model will be trained on multiple queries. I haven't thought through the dynamics this would have, whether they are useful or disruptive, or how they could be exploited.


synctext commented Jun 6, 2023

Sprint: get a solid production-level metadata dataset (Creative Commons, https://github.com/MTG/mtg-jamendo-dataset ?)


mg98 commented Jun 11, 2023

Update:
I have started giving more time again to my first thesis chapter (the one I started writing several months ago with a colleague). Working title: "A Comprehensive Study of Content-Defined Chunking Algorithms for Data Deduplication". I want to finish this within the next 1-2 months!

> MeritRank+crowdsourcing phd chapter 2023?
> Learn-to-rank 2024 thesis chapter?

Love this roadmap!! ❤️

I have started getting my hands on the Tribler code to gain a better understanding of its inner workings. I will try to move forward with the practical preparatory work for the next thesis chapter, such as getting a dataset.


mg98 commented Jul 12, 2023

Over the last month, I was mainly focused on my first thesis chapter. That involved writing, but also running experiments, fixing bugs in our code, and figuring out the best way to present our data.

For example, visualising the chunk-size distribution was a challenge because of the number of experiments and algorithms we evaluate. Furthermore, the variance in behaviour made it difficult to fit everything into a single plot while preserving readability.

Apart from this, I was reading some papers, talking to my peers in the lab, trying to understand what they're doing, and also continuing to explore AI/ML, e.g., learning about transformers.


synctext commented Jul 12, 2023

Please write down 3 ideas for your next paper around the topic of "Web3 crowdsourcing". A 6-pager for the 4th DICG workshop would indeed be a great learning experience. One of the 3: crowdsourcing of metadata using MeritRank; everybody can tag and describe work done on the KnowledgeGraph. Design an Algorithm 1, discover tags, rate them using MeritRank, emulation using Bulat's Python magic, epic Sybil attack. Take Rohan's algorithm, get it going in the tag context instead of recommendation, and improve it somewhat.

Great news; hoping the chapter will be ready for final reading & submission in Aug/Sep. As a starting point for new lab people, a short history and essential reading:

| Year | Topic | Milestone description |
| --- | --- | --- |
| 2000 | vision | Open Information Pools was the first paper with the dream of establishing a new Internet, now grown into the Web3 movement. Dreams of a young man, taking 23 years to realise and still counting. "A reputation function calculates a reputation based on several aspects of the user, like the amount of activity of the user, number of retrievals of his submitted information, the number of modifications to his submissions, etc." |
| 2006 | vision | Tribler, initial overview. First deployment of social networking with academically pure self-organisation |
| 2008 | vision | We reported on the BarterCast feedback loop |
| 2023 | vision | A Deployment-First Methodology to Mechanism Design and Refinement in Distributed Systems |
| 2020 | trust | TrustChain: A Sybil-resistant scalable blockchain |
| 2018 | trust | TrustChain IETF draft Internet standard https://www.ietf.org/archive/id/draft-pouwelse-trustchain-01.txt |
| 2018 | trust | We formulated the improved feedback loop |
| 2019 | trust | Jeremie is now a professor at Delft: RepuCoin: Your Reputation is Your Power |
| 2021 | trust | Foundations of Peer-to-Peer Reputation |
| 2023 | trust | MeritRank solves the impossibility result formulated by Prof. Parkes of Harvard. A reputation system which might actually induce online trust. |
| 2010 | community | Private communities help each other; the biggest measurement ever conducted. No tit-for-tat most of the time within private communities, see Figure 6. |
| 2011 | community | Competition due to oversupply. Fast download but eternal seeding: the reward and punishment of Sharing Ratio Enforcement |
| 2013 | community | Systemic Risk and User-Level Performance in Private P2P Communities |
| 2020 | DAO | Technology Stack for Decentralized Mobile Services (Matt Scala) |
| 2021 | DAO | Towards a Robot Economy for the Music Industry (Tim Wissel) |
| 2022 | DAO | Web3: A Decentralized Societal Infrastructure for Identity, Trust, Money, and Data (Joost Bambacht) |
| 2023 | DAO | Performance analysis of an offline digital Euro prototype (Robbert Koning) |
| 2023 | DAO | First Deployed DAO with True Full Decentralisation (Brian Planje) |
| 2023 | AI | MoDeST: Bridging the Gap between Federated and Decentralized Learning with Decentralized Sampling |
| 2023 | AI | Decentralised recommendations with trust and relevance (Rohan) |
| 2023 | AI | Towards Sybil Resilience in Decentralized Learning (Thomas) |

synctext (Member Author) commented:

New ACM Journal on Collective Intelligence seems like a solid venue to target. Your second chapter could then simply be decentralised collective intelligence. Using Tribler to share info, use tagging, and trust. You can re-publish a lot of the Tribler work done by others in the past 18 years and 3 months.
> the journal encourages a broad-minded approach to collective performance. We welcome perspectives that emphasize traditional views of intelligence as well as optimality, satisficing, robustness, adaptability, and wisdom. In more technical terms, this includes issues related to collective output quality and assessment, aggregation of information and related topics (e.g., network structure and dynamics, higher-order vs. pairwise interactions, spatial and temporal synchronization, diversity, etc.), accumulation of information by individuals/components, environmental complexity, evolutionary considerations, and design of systems and platforms fostering collective intelligence.


mg98 commented Aug 6, 2023

To give a little update...

  • I skim-read through the complete list of papers and made my notes for each one.
  • My thesis chapter has seen great progress but is currently blocked again by my co-author who is on vacation now.
  • I have consulted with Bulat about your proposed paper idea (DICG) to understand and plan the required steps. I have yet to get a better understanding of the field to come up with my own ideas for topics.


mg98 commented Aug 14, 2023

I'm focusing on the DICG'23 now. @grimadas

MovieLens Dataset

I was able to find a really nice dataset. MovieLens is a non-commercial project run by people from U. Minnesota since 1995. It's a database of movies that users are allowed to rate and tag. Any user can add new tags, and existing tags can be up- and downvoted in a way (see screenshot).


Tags also have the attribute of being positive, neutral, or negative. I am not sure how complete their dataset is about that, but they are responsive to my emails and seem highly cooperative with the provision of data.

We can use this dataset to get an idea of the quantity and quality when it comes to crowdsourcing of tags, and base our simulations on it.

Quick plots...

Idea

Perhaps, for this workshop, I could come up with some subjective tag-scoring algorithm, a bit related to the "Justin Bieber is gay" problem: playing with the idea that one group of similar users might agree on a tag while another group of users might not.
Regarding the aforementioned question, there might be a community of users (with similar taste) for whom he is, and another community for whom he isn't, therefore introducing the notion of a "subjective reality".
What indicates user affinity? It could be that they are interested in similar content, i.e., they interacted with (tagged or rated) similar movies.

Approach

  • Propose some algorithm that for a tuple (peer, resource, tag) can calculate a score in [0,1].
  • Introduce the notion of user affinity by some metric of has-interacted-with-similar-movies.
  • Make plots and observations... Perhaps we can visualize the 'bubbles' of communities created by this metric of user affinity and how they support different tags. It will depend on the quality of the dataset though; I'm still skeptical.
  • Simulate my subjective tag-scoring algorithm and observe its effects. Given the tags that the user has agreed and disagreed on in the dataset, we should be able to introduce a metric of success for this algorithm. My concern here, again, is the potential lack of data. We rely on the existence of a big enough set of highly active users whose contributions are spread over not too many movies.

Will further investigate this idea and the dataset and make updates here. Comments welcome.


devos50 commented Aug 15, 2023

Just noticed this line of work, very interesting! I worked on something similar (trust, tags + MovieLens dataset) more than a year ago, see this issue (note that this is a private repo with many of our research ideas so you might have to request access). The overall goal of that issue was to work on the foundation of tag-centric crowdsourcing in Tribler.

I tried out a few algorithms/approaches, and I remember I identified some shortcomings of Credence, which is related to what you're trying to achieve. But as reputation/trust was not really my domain, I decided to focus on the fundamental data structures instead (using Skip Graphs and Knowledge Graphs for Web3 data management). The paper with these ideas is currently under review. Nonetheless, extending that work with a trust layer would be a valuable contribution!


mg98 commented Aug 28, 2023

Hi Martijn :) thanks for your input!

I was knocked out by COVID over the last two weeks, and still am a bit, but here is the continuation of what I was trying to do:

I have calculated user similarity based on the Pearson correlation over the common set of rated movies (as suggested here) and, based on that, subjective tags on movies (indeed similar to Credence in that I weight by peer correlation). I based this solely on the interactions of the 200 most active users (for performance reasons).
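A minimal sketch of this computation, assuming pandas DataFrames of ratings (columns: user, movie, rating) and tag assignments (columns: user, movie, tag); the exact weighting I used may differ:

import pandas as pd

def user_similarity(ratings, u, v):
    # Pearson correlation over the movies both users rated
    common = ratings[ratings.user == u].merge(
        ratings[ratings.user == v], on="movie", suffixes=("_u", "_v"))
    if len(common) < 2:
        return 0.0
    return common.rating_u.corr(common.rating_v)  # Pearson by default

def subjective_tag_scores(ratings, tags, me, movie):
    # Score each tag on `movie` by summing the rating-similarity of the
    # users who applied it to `me` (negative correlations count against)
    applied = tags[tags.movie == movie]
    scores = {}
    for _, row in applied.iterrows():
        w = user_similarity(ratings, me, row.user)
        scores[row.tag] = scores.get(row.tag, 0.0) + w
    return pd.Series(scores).sort_values(ascending=False)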

Example of a sample of users and their subjective tags on the movie "The Shawshank Redemption"
              tag     score
126      friendship  1.465223
260          prison  1.367187
8    Morgan Freeman  0.966742
275            rape  0.924138
151            hope  0.854776
          tag     score
126  friendship  1.109590
260      prison  1.103990
275        rape  0.703847
151        hope  0.673232
285  redemption  0.673232
            tag     score
247        prison  1.158236
121    friendship  0.947820
293       revenge  0.735881
333       suicide  0.685857
134  great ending  0.578089

From there on, I tried to find extreme results, i.e., movie tags for users of "opposite" groups. To this end, I looked up controversial movies and their tags for users with minimum/negative correlation, hoping for something like a clear political or a gender split.

And it wasn't easy, perhaps due to the lack of data. But I still found an interesting disparity for Disney's Star Wars remake.

While one user has funny, good action, and great action among their top tags,
another user has Bad jokes, Weird Pacing, boring, long, and Script Recycle among their top tags, and further, feminism and social justice.
Most of these tags appear with negative scores in the other user's list of tags.

Full list of tags for two negatively correlated users on "Star Wars: The Last Jedi"
                 tag     score
29         plot twists  0.181055
4     Benicio del Toro  0.181055
1                 BB-8  0.181055
16         Space opera  0.181055
7                Funny  0.181055
8          Good Action  0.181055
9     Gorgeous visuals  0.181055
10        Great action  0.181055
12       John Williams  0.181055
17           Star Wars  0.126848
13         Mark Hamill  0.126848
18  Strong female lead  0.126848
14        Rian Johnson  0.126848
0          Adam Driver  0.099062
6         Daisy Ridley  0.099062
3    Bechdel Test:Pass  0.048224
19        Weird Pacing  0.048224
20              boring  0.048224
34        stormtrooper  0.036496
28     part of trilogy  0.036496
33         space opera  0.036496
21              bunker  0.036496
22              defeat  0.036496
23             failure  0.036496
32        space battle  0.036496
25        good vs evil  0.036496
30              sequel  0.036496
27  military operation  0.036496
24            feminism  0.020438
31      social justice -0.027786
2            Bad jokes -0.033770
15      Script Recycle -0.054207
5        Carrie Fisher -0.054207
11         John Boyega -0.054207
26                long -0.170404
                  tag     score
1            Bad jokes  0.854447
16            feminism  0.730180
0          Adam Driver  0.440624
4         Daisy Ridley  0.440624
2    Bechdel Test:Pass  0.413822
11        Weird Pacing  0.413822
12              boring  0.413822
15             failure  0.410806
24         space opera  0.410806
23        space battle  0.410806
21              sequel  0.410806
20     part of trilogy  0.410806
19  military operation  0.410806
17        good vs evil  0.410806
13              bunker  0.410806
14              defeat  0.410806
25        stormtrooper  0.410806
22      social justice  0.316358
10  Strong female lead  0.124267
9            Star Wars  0.124267
8       Script Recycle  0.124267
7         Rian Johnson  0.124267
6          Mark Hamill  0.124267
5          John Boyega  0.124267
3        Carrie Fisher  0.124267
18                long  0.101508 

That was fun to explore, but it still lacks a scientific methodology to really evaluate the effectiveness of the subjective tags I computed. Previously, I proposed that

> Given the tags that the user has agreed and disagreed on in the dataset, we should be able to introduce a metric of success for this algorithm.

Maybe that gives us something. Maybe for all tags that have been both up- and downvoted, I can compare the subjective with the objective reality and derive a success metric. This would allow me to experiment with more sophisticated scoring algorithms and see their effect on this metric.

> The paper with these ideas is currently under review. Nonetheless, extending that work with a trust layer would be a valuable contribution!

Good stuff. I don't know if trust should be in my scope either. I'll talk to Johan today; I will know more then.


Status update on:

> Please write 3 next paper ideas around the topic of "Web3 crowdsourcing".

After almost half a year, I still don't have a good enough grasp of the field to come up with my own ideas for publications.


synctext commented Aug 29, 2023

No worries about your progress in 6 months of a 48-month PhD. Getting into the field of distributed systems and doing something novel is hard. Having a draft publication within the first 12 months is already a solid achievement. Goal: April 2024 == 1 thesis chapter under review + 1 finished draft thesis chapter. Non-linear productivity 📈

Task for September 2023: come up with ideas for a scientific paper and select one (or get inspiration)

SwarmLLM: collective LLM intelligence (with new AI phd expert??)

We present the first proof-of-principle of collective intelligence for transformers. Intelligence emerges from the interaction between numerous elements [REFS]. We use a transformer as the basic building block of an interconnected network of unique transformers. Instead of the classical transformer approach with billions of parameters, we connect thousands of specialised transformers into a network. This is a generalisation of the mixture-of-experts approach with the highly desired new property of unbounded scalability. There is a cost to pay in our approach: in typical divide-and-conquer style, the challenge of finding the correct expert becomes harder.
Go beyond deepswarm, a system with outdated MNIST evaluation from 2019 using pheromone update rules.

LLM as a key/value store

key: any youtube URL in Youtube-8M dataset.
value: the preview thumbnail generated {re-usage of your autoencoder idea??!!}
With sharding and multiple LLMs per node, a unique dataset can be spread across a ring topology, ordered by the keyspace.
Semantic clustering or not?

Rich metadata inside an LLM

Tulip picture embedding in generic form.
Rich metadata embedding of "artist passport" Cullah.
Cullah donation wallet 1FfnfPJJ6yTTT9PGZe8TGgfEGQFc9kvQoW
Linkage with strong identity and GoldEuro direct donation by superfans.

Tribler: a public semantic search engine

We shamelessly claim to have a proof-of-principle for public Internet infrastructure after 20 years of effort. We argue that critical societal infrastructure should be developed as a non-profit endeavour. Similar to Bittorrent and Bitcoin we present a self-organising system for semantic search. Our work is based on learn-to-rank and clicklog gossip with privacy-enhancing technology using a Tor-derived protocol.

Web3Search: Online Pairwise Learning to Rank by Divide-and-Conquer with full decentralisation

Embedding nodes using algorithms like node2vec. Embedding of any item using 50-ish dimensional vector.
Fully decentralise and make trustworthy, this older work from 2021: https://arxiv.org/abs/2103.00368
Dataset: Covid papers, Youtube-8M, creative commons magnet scrapes??

Foundations of Trust and Tags.

Use the @grimadas work with theoretical grounding: emergence of trust networks with repeated successful interactions. Use tags based on crowdsourcing if you have successfully collaborated.
"The extended mind", classic reading from 1998.

Next steps: learn-by-doing methodology. Work for 2 weeks further on the Tulip stuff. Autoencoder of 1000-ish Youtube-8M thumbnails. Work for 2 weeks on another of above brainfarts. Commit to best chapter idea.


mg98 commented Aug 31, 2023

Seeing how far I can get autoencoding YouTube thumbnails. Time for some quick coding.

Using YouTube's API I got the thumbnails of 577 search results with "starship" as the query.
I'm using the same network parameters as I used for the tulips but with fewer input neurons (the thumbnails are 4x smaller in size).
However, it's not like YouTube thumbnails are good representations of the queried terms, nor is there necessarily high visual similarity across the thumbnails in the search results, as you can easily verify: https://www.youtube.com/results?search_query=starship, or browse the YT 8M Dataset Explorer.

Note

Using YouTube's search API instead of its 8M dataset (I can't run that on my machine!) differs in that I collect the thumbnails of videos that match the search query, whereas the 8M dataset sort of matches the query (a selected set of visual entities) against what is actually displayed in the video.

I still went with it, trained the network on 576 thumbnails, and then ran the 577th search result's thumbnail through the autoencoder.

I was initially confused about how well that went... because, as I implied, I don't think this set of images works much better than any arbitrary collection of images. It's funny though, because technically it's a data reduction from 32 KB worth of information down to 4 KB (the size of the model), and 4 KB is also exactly what JPEG compression gets for the original image. So I probably just built myself a generic image compressor. (Error, see my next comment.)

What might help is the labeling we get at frame level (or for ~1-second video segments). We have that in the 8M dataset. Getting an actual image entails downloading the original YouTube video and then extracting the corresponding frame. That's costly, but doable at a small to mid scale.


We have been thinking about doing text-to-image, basically, using autoencoders? I think that was the plan...
The 8M dataset contains embeddings of video segments/frames. It would be cool to be able to embed a custom query like "starship" and then find video segments near it. But it doesn't work that easily. The 8M model is not trained on text. We only get fixed labels, e.g., "guitar", "toy", "minecraft", a few thousand of those. "starship" is not part of this taxonomy. Perhaps we can use a separate model to link the semantics of "starship" to the labels in the 8M dataset and thereby make our way...
Tbc

synctext (Member Author) commented:

wow, as impressive as I hoped it would be!! 4k!
Just drop the YouTube 8M for now and focus on the 1000 Starship thumbnails. How good can you autoencode them all?
What tradeoffs? What does 5 days of M2 GPU crunching yield?


mg98 commented Aug 31, 2023

There was an error in my code, and a bit in the approach: what I did was train on only 50 thumbnails and then use a thumbnail that was part of the training data for testing. I updated my last comment; the result is very different.

Click here to see previous (misleading) result


> What does 5 days of M2 GPU crunching yield?

Actually, PyTorch does not support CUDA (GPU acceleration) on Mac :( Google Colab with a GPU runs faster for me.


synctext commented Jan 30, 2024

  • Great results!!
  • It seems likely you can write a paper with the 'queries is all you need' idea
  • Let's not work on 3 papers at the same time. Submit something before diving further.
  • It covers 1.4 million of the TREC DL documents, providing 18 million connections to 10 million distinct queries. One ORCAS use case is Web mining, to find clusters of related queries and/or related documents. These can be mined for synonyms, used for expanding and understanding the vocabulary of queries and documents. The 10 million queries could be used in studies of query autocompletion., from https://microsoft.github.io/msmarco/ORCAS.html
  • Expand that golden figure! Start writing the first experimentation section; 1-2 pages. No intro, No problem, no related work.
  • Exaggerate! Artificially pick the most extreme example which produces the best results. Select from the 10 million queries the most clustered ones? Select the document with the most ambiguous queries? Select the documents with the most similar queries?
  • Storyline idea: In this experimental section we first present our motivation example and then elaborate with detailed performance analysis. The first experiment is chosen specifically for simplicity and dramatic effectiveness.
  • 1 thesis chapter on block storage, 1 thesis chapter on 'queries all you need'
    • Active learning (crowdsourcing, knowledge graph)?
    • Continuous learning?


mg98 commented Mar 7, 2024

It's time for another update. I have been conducting more and more experiments, as I was trying to better understand what's going on.

Here is one very interesting finding:

💡 More popular documents have a more diverse semantic range of queries than more niche documents.

⚠️ Caveat: "More popular" just means those documents have a larger number of distinct associated queries.

For example, people who looked for Gmail were successful querying "google login online" but also "create a new account".

We can also visualize this by clustering the embeddings of queries for documents of low popularity (40-50 queries) and of high popularity (1000+ queries) in a 2D semantic space (dots represent queries, color-coded by their associated document).


As we can see, the queries of popular documents are semantically more dispersed, which makes them more difficult to group together just by looking at their positions. In other words, if you let k-means do the clustering (intuitively what our model would do), the mismatch rate would be higher for the high-popularity docs than for the low-popularity docs.

There are also metrics to quantify this. In the following figure, I have plotted the Adjusted Rand Index (ARI) of the clustering against increasing document popularity.

ARI values range from -1 to 1. A score of 1 indicates a perfect match between the clustering results and the ground truth, while a score of 0 indicates random clustering, and a negative score suggests less agreement than expected by chance.

I have plotted this for 20 docs and for 40 docs, to show that of course the more documents share the same semantic space, the more crowded the queries get, and the more unreliable the clustering.
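A minimal sketch of this evaluation, assuming scikit-learn, an array of query embeddings, and the ground-truth query-to-document labels (variable names are illustrative):

from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def clustering_ari(query_embeddings, doc_labels, n_docs):
    # Cluster the query embeddings into as many clusters as there are
    # documents and compare against the true query-to-document assignment.
    kmeans = KMeans(n_clusters=n_docs, n_init=10, random_state=0)
    predicted = kmeans.fit_predict(query_embeddings)
    return adjusted_rand_score(doc_labels, predicted)

# e.g., compare low-popularity vs. high-popularity document groups
# ari_low  = clustering_ari(emb_low,  labels_low,  n_docs=20)
# ari_high = clustering_ari(emb_high, labels_high, n_docs=20)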

Finally, this is of course reflected in the accuracy the model can attain, as the following example shows (here with 100 docs trained on 10 queries).


💡 Second finding is that the model often hallucinates, and beam search sometimes duplicates results.

I call a result a hallucination (or invalid) if it outputs a string that was not part of the outputs it got trained on.

This means that if you have a system where you are aware of the existing documents, you can instead take the first valid docid output by beam search, and therefore bump the accuracy.

On the lowest range of our experiments (100 docs and 1 query fed), this made a difference of +3%. However, on the upper end it is rather negligible. The following table shows an excerpt of the results.

| Documents | Queries | acc@top1valid | acc@top1 | acc@top3 | acc@top5 | inv@top1 | inv@top3 | inv@top5 | dup@top5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | 1 | 0.4265 | 0.3955 | 0.457 | 0.483 | 0.2185 | 0.534833 | 0.6417 | 0.0006 |
| 100 | 5 | 0.824 | 0.8235 | 0.898 | 0.9365 | 0.0015 | 0.2255 | 0.307 | 0.0185 |
| 100 | 10 | 0.878 | 0.8775 | 1 | 1 | 0.002 | 0.227833 | 0.3431 | 0.1753 |
| 100 | 20 | 0.932 | 0.932 | 1 | 1 | 0.0015 | 0.327833 | 0.4391 | 0.0628 |
| 1000 | 1 | 0.20515 | 0.157 | 0.198 | 0.2156 | 0.58495 | 0.757117 | 0.81045 | 0 |
| 1000 | 5 | 0.68995 | 0.68925 | 0.77 | 0.80935 | 0.006 | 0.16825 | 0.23343 | 0.01082 |
| 1000 | 10 | 0.79405 | 0.79385 | 0.96025 | 1 | 0.0014 | 0.191117 | 0.2658 | 0.07091 |
| 1000 | 20 | 0.832 | 0.8314 | 0.93815 | 1 | 0.0022 | 0.217933 | 0.28892 | 0.04126 |


mg98 commented Apr 8, 2024

I have been digging into some papers in the context of the upcoming Queries-Is-All-You-Need chapter and potential future work. Dumping my learnings here.

Neural Corpus Indexer (NCI) [PDF]

Abstract: NCI is currently probably the best-performing version of DSI (after GenRet 😉 see below). NCI and others, however, gain most of their extra performance from leveraging document contents.

Most of these proposed systems, like DSI-QG and SEAL, leverage knowledge of the document contents. For example, they artificially generate queries to train on, or create semantic docids based on the contents (e.g., SEAL uses n-grams for indices/docids).
So it's important to keep those techniques out of scope - we are interested in content-oblivious search.
NCI is interesting because, besides doing a lot of other things, they offer two technological improvements that do not rely on document knowledge:

  • Prefix-aware weight-adaptive (PAWA) decoder
  • Consistency-based regularization

Combined, these changes increased the Recall@1 from 62.57 to 65.86.

DSI++ [PDF]

Abstract: This is an update by the original authors of DSI to equip DSI with continual learning (aka lifelong learning).

This is how they do it:

  • They optimize for flatter loss basins. There's this thing called Sharpness-Aware Minimization (SAM), which seeks out flatter areas of the loss landscape. My intuition is that this gives a wider area of tolerance when new things are learned and the altered parameters shift the loss.
  • Then, they employ a second model (generative memory) that keeps generating pseudo-queries for previously learned documents. These are then fed into the DSI for rehearsal.

GenRet [PDF]

Abstract: Using an autoencoder to learn to tokenize documents (my idea is to use the mean of the collection of associated queries instead) into short semantic docids.

The accuracy drops as soon as you replace ORCAS docids (e.g., D1234839) with magnet links (40-character hex strings). Intuitively, the more tokens the model needs to generate, the more likely it is to make a "typo".
Therefore, I asked myself how we can represent N documents using the fewest tokens. I naively tried to create docids myself, calculating the shortest length of characters or tokens with which I could still create N unique combinations. But it turns out this performs worse than or equal to ORCAS docids. It is apparently not that simple, because of the semantic meaning carried by every token.

So instead of writing a docid tokenization function myself, I would like to learn a function that does the docid tokenization. This is what the authors of GenRet did, and thereby (and only thereby) outperformed NCI (which got most of its performance from exploiting document knowledge; we could maybe apply GenRet's technique using queries only).

I roughly understood the theory of how they were doing it, but luckily they are also open source (repo), I will try to get it to work and run experiments.


Other Interesting Papers for Future Work

  • DynamicRetriever team combined DSI with actual Learn-to-Rank [PDF]
  • Same team was also working on personalized LLM-retrieval, idk how it works [PDF]
  • DSI team published TIGER: Recommender system using LLM with I/O of form (doc1, doc2, doc3) -> (doc4) [PDF]


synctext commented Apr 10, 2024

update: (more paper ideas than finished chapters; simply documenting)
A Survey of Hallucination in Large Foundation Models. Would it be possible to filter hallucinations of our magnet-link generative AI (Queries Is All You Need)? The popularity community is a gossip mechanism to update statistics of millions of torrents. We should be able to track dead torrents, popular torrents, and hallucinated torrents.


mg98 commented Apr 12, 2024

"learn to tokenize documents"

I got the script to work (GenRet). However, I was rethinking this idea and I don't see how we could sell it. Learning semantic docids from the queries themselves obviously requires you to have the queries beforehand, and even then it implies some fluidity (cannot just "improve" the ID in the continuum).
If anything, DSI's hierarchical clustering is much better suited for semantic docids, because here at least the semantic space on which it clusters it is fixed. GenRet really goes one step further and limits the space to what the document space encompasses. [my understanding]

I'm dropping this!

> roadmap on life-long decentralised learning [...] what is the first realistic next step?

Yeah, as you listed: overfitting, pollution, spam; those are also things that come to my mind. While there are some ideas on how it could work conceptually (e.g., DSI++, IncDSI), the datasets they use to validate them are a bit weak (in those papers, NQ and MS MARCO). For a real evaluation, we (1) need real search queries, including spam, and (2) should care about the chronological order in which the queries come in and in which the model learns from them.

Waiting for the Tribler Clicklog.

> Would it be possible to filter hallucination of our magnet link generative AI (Queries Is All You Need)?

Not in the way described in this body of research (referring to your survey link), I think.
What I have already done in my experiments is to restrict the vocabulary of the generator and control the length. This way you can tune it a little bit towards the pattern of a docid.

What we can do in the next step is to assume knowledge of, let's say, healthy torrents. Using this knowledge, the model is constrained to predict only tokens with which the resulting output continues to match a prefix found within the set of healthy torrents. A sketch of this follows below.
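One possible way to implement that, assuming a Hugging Face T5-style model/tokenizer and a set of known healthy docids (this is an untested sketch; healthy_docids and the query string are illustrative):

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")  # our fine-tuned model in practice

healthy_docids = ["D1555982", "D301595"]    # illustrative; e.g. fed by the popularity community
healthy_token_ids = [tokenizer.encode(d) for d in healthy_docids]

def allowed_tokens(batch_id, input_ids):
    # Only allow tokens that keep the generated output a prefix of a healthy docid.
    prefix = input_ids.tolist()[1:]         # drop the decoder start token
    allowed = {seq[len(prefix)] for seq in healthy_token_ids
               if len(seq) > len(prefix) and seq[:len(prefix)] == prefix}
    return list(allowed) or [tokenizer.eos_token_id]

outputs = model.generate(
    **tokenizer("ubuntu 21.10", return_tensors="pt"),
    num_beams=5,
    num_return_sequences=5,
    prefix_allowed_tokens_fn=allowed_tokens,
)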

Will do!


mg98 commented May 7, 2024

Representation of Targets Matter

In our last paper, we saw significant differences in performance when representing our targets as ORCAS docids (e.g., D1234567) vs. as magnet links (40-character hex strings). The model generally had a harder time predicting magnet links. We blamed this on their length: more tokens to generate, more chances to trip along the way.

When thinking about how to optimize the performance of our model, I therefore thought the number of tokens in a docid should be minimized. Why not use the entire ASCII space, for example? Or hell, T5-small has 32k tokens; why not encode docids as, for instance, "car 2001": two tokens, 1 billion possible combinations.

It turns out this confuses the model more than it helps 😅. This beeeeeegs the question....

🤔 Using an LLM to predict arbitrary identifiers, what kind of identifiers come natural to it?

Is it a question of length? Or consistency? Or the employed vocabulary? What tokens should you use? How many?

I ran a lot of experiments to get closer to an answer about all these things.

In order to enhance the performance of our model, I initially thought that the number of tokens used to represent a docid should be minimized. My rationale was: fewer tokens, fewer chances to mispredict. And while that might be true, or maybe only true to some extent, it definitely seems to be the case that the employed vocabulary matters too!

I have been experimenting with different representations (or rather encodings) of the targets (i.e., the docids) -- and, spoiler, the results are actually quite impressive.

Here is exactly what I did

  1. I transform all ORCAS docids to their SHA1 hash. This hash has 20 bytes. Because my experiment only samples 100 docids, I only took a slice of 2 bytes from that hash.
  2. I encode this 2-byte hash slice as, for instance, a hexadecimal string. This yields docids of the form "fc48", "7f82", ... (a sketch of these encodings follows after this list).
  3. I use the 1-query-on-100-docids experiment as a baseline and measure its top-1 accuracy.
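A minimal sketch of steps 1 and 2 using only the standard library (the exact code may differ; the dec_x variants simply space out the digits):

import base64
import hashlib

def encode_docid(docid, encoding="hex"):
    # Map an ORCAS docid to a 2-byte slice of its SHA1 hash,
    # rendered in one of the tested encodings.
    two_bytes = hashlib.sha1(docid.encode()).digest()[:2]
    value = int.from_bytes(two_bytes, "big")
    if encoding == "hex":
        return two_bytes.hex()                         # e.g. "fc48"
    if encoding == "dec":
        return str(value)                              # e.g. "9958"
    if encoding == "dec_pad":
        return f"{value:05d}"                          # e.g. "09958"
    if encoding == "bin_pad":
        return format(value, "016b")                   # e.g. "0000111000000100"
    if encoding == "base64":
        return base64.b64encode(two_bytes).decode()    # e.g. "07s="
    raise ValueError(encoding)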

I repeat this experiment with different encodings. A full list, including some result metrics, is shown below.
(Vocab. Size: number of distinct tokens needed to encode 100 docids; Vocab. Sim.: mean pairwise Euclidean distance between all tokens in the vocabulary)

| Encoding | Description | Example | Accuracy | Mean Token Length | Vocab. Size | Vocab. Sim. |
| --- | --- | --- | --- | --- | --- | --- |
| original | Original ORCAS docid | D972207 | 0.3360 | 4.23 | 146 | 408 |
| dec | Integer | 9958 | 0.3345 | 2.73 | 131 | 416 |
| dec_pad | Zero-padded integer | 09958, 14382 | 0.4020 | 2.83 | 136 | 427 |
| dec_x | Integer with spaced digits | 9 9 5 8 | 0.2175 | 5.18 | 11 | 312 |
| dec_x_pad | Zero-padded integer with spaced digits | 0 9 9 5 8 | 0.2545 | 5.48 | 11 | 314 |
| bin | Bitstring | 111000000100 | 0.2700 | 5.59 | 21 | 417 |
| bin_pad | Zero-padded bitstring | 0000111000000100 | 0.3015 | 6.09 | 24 | 427 |
| hex | Hexadecimal encoding | 0b84 | 0.3600 | 3.62 | 93 | 439 |
| base64 | Base64 encoding | 07s= | 0.3400 | 4.25 | 109 | 435 |
Boxplot of token lengths with each encoding


🚀 In this experiment, we made a 7% performance increase over our original results just by choosing a different encoding for the docids

The model seems to have an easier time with numbers. Maybe it is because there exist many tokens for compound digit strings (69, 420, 2001, 1944), resulting in fewer tokens needed to represent a docid.
There is an incredible 7% difference between dec and dec_pad, showing again how consistency is key! As I have already learned when trying to mix magnet links with YouTube URLs, the model really suffers with inconsistent formats. And if you don't believe that the consistent character length is reflected in a consistent token length, check the boxplot! It's actually kind of crazy.

Another theory I have is that, having predicted a number token, the model is more likely to predict another number token based on this context, and that this might help performance a little.

It is perhaps also interesting to note that number tokens are semantically very similar to each other. That is to say, the tokens for 56 and 55, or even 21, are semantically very close. Indeed, if you do k-means clustering on all token embeddings and then look for the densest clusters, they will contain years (2012, 2013, 2014) or other numbers. 💡

We might already be very close to the perfect (or good-enough) representation. But it might be interesting, not just for this application but also for the broader ML community, to investigate which representations an LLM works best with. @pneague and I were thinking of using ML (genetic algorithms in particular) to learn an optimal vocabulary for representing arbitrary string identifiers. 🌈

Edit: Looking at the results again, it might just be that the model favors a low but consistent token length. But more experiments need to be conducted.


mg98 commented May 29, 2024

Encoding of Targets

I threw away the idea of using ML/genetic algorithms to determine the best tokenization. It's not that complicated after all!

As the prior analysis already indicated (cf. boxplot in prev. comment), the LLM works best when the targets are represented in an encoding that

  1. produces a consistent number of tokens
  2. and uses few tokens

The tokenization of strings like "a3bf01..." is unreliable in the number of tokens it produces.
So, instead, I extended the tokenizer with what I playfully call poop-tokens, and I create one for every possible byte value.

CUSTOM_TOKENS = [f'💩{i}' for i in range(256)]

This makes later encoding easy, as it is just a mapping of CUSTOM_TOKENS[byte_value].
Finally, I get targets that look like "💩16💩201💩123💩4...", and the tokenizer now operates in a very predictable way; meaning, it will always produce a consistent number of tokens.
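A minimal sketch of how such byte-level custom tokens could be wired into a Hugging Face tokenizer/model (illustrative; the actual setup may differ):

from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One custom token per possible byte value
CUSTOM_TOKENS = [f'💩{i}' for i in range(256)]
tokenizer.add_tokens(CUSTOM_TOKENS)
model.resize_token_embeddings(len(tokenizer))   # new embeddings start out random

def encode_target(docid_bytes):
    # Render a docid (as raw bytes) with exactly one custom token per byte
    return "".join(CUSTOM_TOKENS[b] for b in docid_bytes)

# e.g. a 2-byte docid hash b'\x10\xc9' becomes "💩16💩201" -> always 2 tokens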

This approach yielded the best accuracy we have ever measured. For the experiment whose results I listed in the table above, this encoding yielded 42% (if I remember correctly).

Further experiments could alter the initial embeddings of the poop tokens (currently random), or compound bytes so that the number of tokens per docid is further reduced, in this case halved:

CUSTOM_TOKENS_2 = [f'💩{i}' for i in range(256**2)]

Practical Implications of Our Results

As we uncovered in our last meeting, we are actually only determining the accuracy on the next unseen query, whereas most queries are likely to have been used before. In other words, we don't even know how much of a problem unseen queries are, and what we would gain in real-world scenarios.

Therefore, I would like to include another experiment that suggests the real-world implications of our results. Queries targeting a document follow a power-law distribution (there are dozens of papers on that). What I would like to do is assume some probability distribution and map it to our dataset.

That is, instead of doing a distinct train/val/test split of the queries, I want, for each set of length n, to draw n queries from this probability distribution (see the sketch below).
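A minimal sketch of such a draw, assuming a Zipf-like popularity over each document's known queries (the exponent and the example queries are illustrative):

import numpy as np

rng = np.random.default_rng(42)

def sample_queries(queries, n, alpha=1.5):
    # Assume the i-th most popular query is drawn with probability ~ i^(-alpha),
    # then draw n queries (with replacement) from that distribution.
    ranks = np.arange(1, len(queries) + 1)
    probs = ranks ** -alpha
    probs /= probs.sum()
    return list(rng.choice(queries, size=n, replace=True, p=probs))

queries = ["gmail login", "google mail", "create gmail account", "gmail sign in"]
train = sample_queries(queries, n=20)
test = sample_queries(queries, n=5)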

This approach is still flawed, however, as we ignore the degree of similarity that is correlated with the frequency of query-usage. For instance, the 2nd most used query could just be the 1st query with a typo.


synctext commented May 29, 2024

  • You are now performing at phd level 🦄 🦄 🦄
  • Awesome progress, great shape after 12 months of work.
  • The above issue updates are at a deeper level than usual. Instead of a black-box approach, you have now dived into complex details 🎉
  • Status after 12 months:
    • 1 very thorough paper in chunking and deduplication
    • Queries is all you need 2-page writeup
    • Idea for 3rd thesis chapter: decentralised pairwise online learn to rank
  • Upcoming sprint: paperwork for http://www.graduateschool.tudelft.nl/ (reviewers from https://www.tudelft.nl/ai/design-at-scale-lab ?)


mg98 commented Jun 17, 2024

In the last three weeks I was occupied with redoing the stochastic calculations for the chunking algorithm AE and its "cousin" RAM. The purpose of this analysis is to derive a formula for the relationship between their parameter $h$ and the expected mean chunk size $\mu$. This enables us to conduct a comparative study across all algorithms. ~added a 3-page math appendix A

We're approaching 40 pages now and are thinking with @grimadas about how to sell this work; ideas include breaking it into two papers: one survey/SoK theoretical paper and one about the empirical study. Possible target: JSys, 1 August.

Draft thesis title for forms: Decentralized Machine Learning Systems for Information Retrieval

synctext changed the title from phd placeholder: IPFS crawl experience, metadata, and semantic gap awareness to phd placeholder: "Decentralized Machine Learning Systems for Information Retrieval" on Jul 2, 2024

mg98 commented Sep 2, 2024

Progress after this summer:

The problem is not that we have failed to decentralise BM25 for 30 years. The metadata is simply not there to implement anything. So we need trustworthy metadata before we can realise any search.

Idea

Use LTR as a means to collect metadata from implicit cues (this solves the incentive problem). We propose two strategies:

  1. Local-only approach: each peer trains a local model on locally generated training data only (pro: solves the trust issue, personalization; con: scarce training data, cannot generalize well to unseen queries)
  2. P2P multitask learning approach: the model is split into two parts, a globally shared bottom (facilitated by p2p gossip) and personal layers (pro: personalization + global knowledge; con: trust is out of scope, though the problem may be somewhat mitigated by still having local-only layers). A sketch of this split follows below.
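A minimal sketch of the model split in strategy 2, assuming PyTorch: a shared bottom whose parameters would be gossiped and averaged with peers, and personal top layers that never leave the device (sizes are illustrative):

import torch.nn as nn

class SplitRanker(nn.Module):
    def __init__(self, feature_dim=136, shared_dim=64):   # e.g. 136 query-document features
        super().__init__()
        # Shared bottom: parameters exchanged/averaged via p2p gossip
        self.shared = nn.Sequential(nn.Linear(feature_dim, shared_dim), nn.ReLU())
        # Personal head: trained only on the peer's own click data
        self.personal = nn.Sequential(nn.Linear(shared_dim, 32), nn.ReLU(),
                                      nn.Linear(32, 1))

    def forward(self, query_doc_features):
        return self.personal(self.shared(query_doc_features))

    def gossip_state(self):
        # Only the shared bottom is shipped to other peers
        return self.shared.state_dict()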

ltr draft.pdf


synctext commented Sep 2, 2024

Comparing money and search...: there is a lack of incentives for transaction processing with double-spending prevention in peer-to-peer payments. Then Bitcoin came along; it introduced mining to make it costly to participate in a lottery. A single person, selected at random from the participants, is trusted to execute money transfers without fraud. Vulnerable to a 51% attack, it requires an honest majority. We don't have that in the metadata and decentral search world. Making a decentral Google is harder than printing your own money 🤑.

ToDo: 4 polished pages of article text. Only journal-submission-ready sections please. No experimental setup. No experiment description. No early results section. Those are left for future sprints.


mg98 commented Sep 16, 2024

First principles approach: I took some steps back to learn about the evolution of information retrieval techniques and what place learning-to-rank has in there. To that end, I found the following resources incredibly valuable:

Another interesting resource I want to share (did not read it very seriously yet, keep it for future work)

Update of LTR chapter draft (improved intro and worked on background/related work):
ltr_chapter_draft.pdf


synctext commented Sep 27, 2024

Bit shocking to read the available papers. The state of the art in decentral search and learn-to-rank makes you cry 😭 Both CASearch from 2020 and MAAY from 2006 are evidence that Stanford-class scientists do not touch a scientific topic without startup potential and with high engineering cost. Anything decentralised is easily 100x more effort, or 99% of central approaches don't work and you need to invent that 1%. Peer-to-peer is so 2001; everybody left the field.

Background reading on paper publishing versus projects in AI. Thinking two steps ahead is rather interesting advice. Top-level analysis on decentral search from 2024. Discussing the Go-NoGo moment for the Learn-to-Rank paper. Is there a need for a distraction 🤡

Please note that a PhD thesis is highly specialised. It can be entirely about decentralised learn-to-rank 💥 🤔 So strategic thinking is indeed: which peers can we trust with privacy-preserving ClickLog info? Have a noble science goal in mind, such as "scalable models of intelligence". For a great storyline we need stuff like superhuman performance of AI (their code).

Road to publishable results!
Use running code from allrank project on Github possibly.
No pause of all writing, but interleave sprints with a dedicated experimental focus with a few writing sprints. Ambition level: learn-to-rank submitted by Feb 2025. {end of 2024 is much better for PhD progress}
General storyline: we have this dataset, apply G-Rank + LambdaRank + ApproxNDCG + NeuralNDCG, do a novel thing, beat all other curves/algorithms. So: algorithmic work. Or valuable experimental work: move decentralised learning from an academic dream to Tribler reality. See decentral crypto-related work: NebulasRank. You would need to do an AI MSc first before improving NeuralNDCG. The same approach as your De-DSI work would be De-NeuralNDCG 🦄 🦄 🦄

Future paper ideas:

  • Dataset paper (performance or baseline accuracy) with peer latency, churn, privacy preserving (Our Tor fork?) User1 queries, clicks, etc. of 10k users?
  • Spam focus for learn-to-rank or search
  • All experimental work around decentral search is limited to "syntax search"; semantic decentral search has never been deployed. We could make a great table of that (e.g. IPFS search, Freenet2, etc.). The search phase needs to be semantic, a great further thesis chapter. It comes before learn-to-rank (e.g. re-ranking), so great for thesis coherence.

Sprint focus: De-NeuralNDCG

  • First duplicate existing work; later sprints do your own thing. Get AllRank working with Web10K on a laptop and/or DAS6. "The dimensionality of the initial fully-connected layer was set to 96 for models trained on Web30K and 128 for models trained on Istella." Can AllRank be run on the DAS6 even with Web30K and Istella?
  • Find another Github project and identify the best ranking framework on Github
  • Comparing to central solutions ⚡ Comparing to G-Rank or MAAY ⚡ MarcelRank does best in table with 6 others ⚡ You are the baseline???

⚠️ No need to scrape, just re-use several years of work {OOPS, the website went offline; archive.org backup}

@mg98
Copy link
Contributor

mg98 commented Oct 2, 2024

✨ Grand scheme thinking late at night (loose understanding of literature, take everything with heaps of salt) ✨

Inspired by my latest reads and chats in the lab, there is a vision for decentralized IR that is growing increasingly stronger in me.

The Future of Decentralized IR
... is the present of centralized IR. It is neural information retrieval techniques, such as LTR, embeddings, LLMs. Essentially, it's the global brain.

State of the Global Brain
What goes under "gossip learning" is a weird fantasy project to me. I think it's federated learning but without a central server, literally only cutting out the central aggregator node. It's not meant to be decentral in the sense that we study. The trust model isn't just missing; it's inconceivable.
DeMoE fundamentally suffers from the same problem, in addition to new problems created by the DHT architecture.

My Vision
I don't think that either gossip learning or MoE are natural! It's not how real p2p systems, and by that I mean the human2human systems that pushed our species to glory, work out information management and retrieval.
We don't learn everything from everyone and become almighty together. We also don't designate others and ourselves to become experts on specific areas of interest. No! We just live our lives, we make friends, we make experiences. And that's how I think we should approach P2P.

My Vision, but being concrete
Users should have their own brain (= model), learn from their own interactions, learn from their own files, which they will be incentivized to acquire and to study (@pneague's AI file analysis idea here) because they were intrinsically motivated to get those in the beginning. They should never let their model be poisoned by other peers. Humans don't merge brain cells with each other. We learn from interactions with our peers. We learn from future experiences whether we should trust that peer again. We know how to find information outside of our domain by asking a friend who is "semantically close to our query" or where at least their neighborhood is.

I think this vision opens the door to so many research papers, e.g.,

  • Local metadata enrichment through multimedia AI file analysis - Petru's 1st chapter
  • Web-of-trust inspired trust model based on above idea - My 3rd chapter
  • Design of a query routing protocol, also based on Petru's 1st chapter (see the Graph Diffusion Scheme paper)
  • and many more

@mg98
Copy link
Contributor

mg98 commented Oct 18, 2024

Did my coding homework for the upcoming LTR paper. That includes

  • implementing the dataset crawler
  • replicating baseline algorithms where possible (toy scoring sketches after this list)
    • DINX: ranks by download count -> we rank by number of seeders
    • CASearch: ranks files by "number of nodes that have it" and latency -> we cannot simulate latency, so this will be the same as DINX for us
    • G-Rank: collaborative filtering (number of clicks -> number of seeders; observed clicks/queries by node A -> we just use all the observed clicks/queries)
    • MAAY: complex algo, similar problems to G-Rank (no node subjectivity in observed metadata)
    • Panaché: ranking by keyword hit count
    • Tribler: as original ranking in dataset
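
For illustration, the simpler baselines boil down to one-line scoring functions. A rough sketch (field names like `seeders` and `title` are my assumed schema, not the crawler's actual output):

```python
def dinx_score(query, doc):
    # DINX ranks by download count; we substitute the seeder count.
    return doc["seeders"]

def panache_score(query, doc):
    # Panaché: keyword hit count over the torrent title.
    title = doc["title"].lower()
    return sum(title.count(term) for term in query.lower().split())

def rank(query, docs, score_fn):
    # Shared helper: higher score first.
    return sorted(docs, key=lambda d: score_fn(query, d), reverse=True)
```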

We plan to implement LTR using the allRank framework, which is based on Context-Aware Learning to Rank with Self-Attention (2020)

Current concerns:

  • No clean replication of baseline algorithms due to differing network topology and p2p architecture assumptions
  • No clean dataset, as we have to somehow remove the bias caused by the original Tribler ranking (I will probably refer to some formula from the literature, but it still feels a bit hand-wavy)
  • I don't see the systems-angle scientific contribution of this paper yet. So far, it's been an engineering exercise.

@synctext
Copy link
Member Author

synctext commented Oct 18, 2024

  • Able to write Tribler code ❤️
  • First experience with AllRank from Polish paper using transformer in 2020 for learning-to-rank
  • Local metadata enrichment through multimedia AI file analysis - Petru's 1st chapter
  • Web-of-trust inspired trust model based on above idea - My 3rd chapter
  • Design of a query routing protocol, also based on Petru's 1st chapter
  • Popularity community, base line, relevance ranking
  • Scalable models of intelligence
    • human2human systems that pushed our species to glory 🎉 🎖️ 🍖
    • New security model: Mixture-of-Friends
    • Unify the knowledge graph with the social graph
    • use-case: search and rank 🔍
    • Storyline: asking a friend who is "semantically close to our query"
      • semantic query routing, one-hop only
      • semantic overlay across the social graph
      • expertise awareness, expertise ranking, and utilisation
    • prior lab work: Web3Recommend: Decentralised recommendations with trust and relevance
    • AI that is collectively owned
      • evolve beyond open-weight towards permissionless innovation (fork model)
      • free-to-join
      • Towards Byzantine fault tolerant architecture
      • Sybil resilient
    • Continuous learning, social learning, etc.
    • Accepts code improvements, self-evolving human collective
    • Fine-tune only, no human builds their own civilisation from scratch
    • Dataset problem

What to focus on? Set deadline of https://euromlsys.eu/ upcoming Feb!
Prior Tribler sprint, back to central transformer.

  • get a transformer running and trained from scratch 😮
  • assume honest gossip learning, later: social graph, MeritRank
  • Input can be k=1000; output: a perfect ranking, without overheating the GPU?
  • Cardinal open question: can the entire thesis be on De-NeuralNDCG
    • honest model, actual attacks, epic sybil attack
    • personalisation, train on our own preference, re-rank
    • new item discovery, new queries
    • online learning, long-enduring learning, avoid over-fitting when running for a decade
    • Stronger baseline for transformer-based gossip learning
  • Live demo ability, make .GIF
    (animated demo GIF: Kapture 2023-06-02 at 13 21 39)

@mg98
Copy link
Contributor

mg98 commented Oct 31, 2024

I'm still working on getting a p2p simulation for my ranking algorithms on the DAS6. To that end, I have also started getting familiar with decentralizepy, the research framework for DL of our colleagues at EPFL. It's like pyipv8 for machine learning. I'm thinking of using it for my experiments.
Also, I got allRank to run with MSLR-WEB10K. I develop my code for this project in https://github.com/mg98/DecLeToR.

During our co-writing exercise this Tuesday, I developed a spontaneous idea in an attempt to make collaborative LTR work in a network with untrusted peers. Moreover, this idea could also generalize to other ML tasks.
I have received good feedback from our colleagues at the ML systems group. I'm still waiting for Martijn's feedback before pursuing it further.

The idea is very simple!

Every user has their local model, which they train on locally generated data (i.e., search behavior). It is also used for inference (in this case, the reranking of search results).
This model will effectively learn optimal personalized ranking. But it will do so slowly and will be unable to generalize to new queries. Hence, we would like to extend the user model's ability with what our peers have learned.

Blind model averaging is suboptimal because (1) peers' models do not always align with personal search interests and (2) of the risk posed by byzantine nodes.

I propose a very simple protocol.

  1. Every time you perform a new search, you infer from your model and select a result; your model incurs a loss (between what you selected and how the model ranked that result).
  2. You now sample k models from your randomized neighborhood.
  3. You simulate your recent search task on these models and see if one model yielded a loss lower than your own model.
  4. You slightly nudge your local model towards the "winning" model.
  5. You keep the winning model for the next round, and discard the others.

Hope that was clear enough. More formally:

(screenshot of the formal algorithm: Screenshot 2024-11-01 at 10 31 27)
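
Since the formalization sits in the screenshot, here is a minimal PyTorch-flavoured sketch of how I mean steps 1-5. The explicit local gradient step, the nudge factor `alpha`, and all names are illustrative choices of mine, not a fixed design:

```python
import torch

def nudge_towards(local_model, winner_model, alpha=0.1):
    # Step 4: move local parameters a small step towards the winning model.
    with torch.no_grad():
        for p_loc, p_win in zip(local_model.parameters(), winner_model.parameters()):
            p_loc.add_(alpha * (p_win - p_loc))

def search_round(local_model, sampled_models, x, y, loss_fn, optimizer):
    """One round for a single observed search.
    x = query/result features, y = the user's click,
    sampled_models = the k models fetched from the randomized neighborhood."""
    # Step 1: loss on the fresh interaction, plus an ordinary local SGD step.
    optimizer.zero_grad()
    local_loss = loss_fn(local_model(x), y)
    local_loss.backward()
    optimizer.step()

    # Steps 2-3: replay the same search on the sampled peer models.
    with torch.no_grad():
        peer_losses = [loss_fn(m(x), y).item() for m in sampled_models]

    # Steps 4-5: nudge towards and keep the winner, if any; discard the rest.
    if peer_losses:
        best = min(range(len(peer_losses)), key=lambda i: peer_losses[i])
        if peer_losses[best] < local_loss.item():
            winner = sampled_models[best]
            nudge_towards(local_model, winner)
            return winner   # kept for the next round
    return None             # all sampled models are discarded
```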

@synctext
Copy link
Member Author

synctext commented Nov 1, 2024

🦄 🦄 🦄 🦄 Seriously great idea 🦄 🦄 🦄 🦄

I have never seen an algorithm 1 form the basis of a phd idea in the past decade. That was the good news; the bad news is that simple ideas with remarkable performance are still difficult to publish. Reviewers really need to be convinced that you're not cheating, not selling snake oil, and have truly found something transformative. A new architecture for decentralised machine learning 😲

Obviously, duplicating models is a non-starter; too expensive an operation. So this needs an Algorithm II with IPv8 remote queries of randomly selected models. The critical element is replacing vertical/horizontal federated learning with optimal training-data crawling. Then there is an Algorithm III with semantic clustering, where neighbours are biased to connect to the nodes that are most similar.

{storyline example disclaimer: as systems people we do not know which exact ML terms to use. Whole little story below could be incorrect. Please check the papers}
Gossip learning represents a unique approach to decentralised federated machine learning. With random peer-wise exchanges it addresses high communication costs, stragglers, and fault tolerance issues. The gossip learning concept pre-dates the classic 2017 federated learning deployment paper by Google REF. Gossip learning was conceptualised and deployed in 2013 REF1 REF2. This approach blends the peer-to-peer paradigm with machine learning. However, the fully self-organising architecture also introduces critical trust and security issues. Early gossip learning required the strong assumption of honest behaviour by all participants.

The critical element is replacing vertical/horizontal federated learning with optimal training-data crawling. The classic approach is to divide models vertically or horizontally across nodes. These small shards then fit into the limited resources of a single node, forming a distributed system. Our gossip learning approach is based on having autonomous smaller nodes and a unique subset of training data for each node. Personal data can be used to train a node.

Bittorrent and Bitcoin are fully decentralised; no permission is needed to join such systems. Our gossip learning approach is also permissionless. By using a Pagerank-inspired algorithm we address the security concerns. ToDo: add details.

We now present a remarkably simple gossip learning algorithm based on the game theoretical concept of win-stay, lose-shift. This game theory strategy has been shown to outperform tit-for-tat in the classic prisoner's dilemma game.
It is a simplification of graph-based learning using random walks. (READ REFs: Learning from Heterogeneous Data Based on Social Interactions over Graphs 2) Adaptive Social Learning 3) Adaptation in Online Social Learning 4) Interplay between Topology and Social Learning over weak graphs)
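
For intuition only, a toy sketch of win-stay, lose-shift in the iterated prisoner's dilemma (the payoff threshold of 3 corresponds to the usual reward for mutual cooperation; the move encoding is arbitrary):

```python
def wsls(prev_move, prev_payoff, win_threshold=3):
    """Win-stay, lose-shift: repeat the previous move after a good payoff,
    switch to the other move after a bad one."""
    if prev_payoff >= win_threshold:   # "win": stay with what worked
        return prev_move
    return "defect" if prev_move == "cooperate" else "cooperate"
```

In the gossip-learning analogue, "stay" means keeping your own model when no sampled neighbour beats it, and "shift" means nudging towards the neighbour that did.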

{see details above} (same algorithm screenshot as in the previous comment: Screenshot 2024-11-01 at 10 31 27)

dataset: Please use ORCAS. Web10K is not human readable. Systems people love finding and fixing bugs; it is essential to get a feel for the performance of your code with actual queries and actual URLs. {query embeddings, no content analysis or feature vectors.} Compare to the (overly 😁) simple approach: pair-wise online learn-to-rank https://github.com/mg98/p2p-ol2r
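
For a quick feel, ORCAS can be loaded via the ir_datasets package (a sketch assuming `pip install ir-datasets`; the first call downloads a multi-GB dump, see the ir-datasets page for details):

```python
import ir_datasets

# ORCAS: real search queries mapped to clicked MS MARCO documents.
ds = ir_datasets.load("msmarco-document/orcas")

for query in ds.queries_iter():
    print(query.query_id, query.text)                   # human-readable query strings
    break

for qrel in ds.qrels_iter():
    print(qrel.query_id, qrel.doc_id, qrel.relevance)   # click-based relevance pairs
    break
```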

related work: Gossip learning at scale has some resemblance to Mixture of Experts (MoE). MoE uses multiple learners to divide a problem space into homogeneous regions REF. These learners are located on the same machine. For instance, the Mixtral system uses an "8 sparse mixture of experts" REF with a gating function and a confusingly named "router". The critical difference is that gossip learning uses network-based routing to other machines; thus millions of "experts" can be selected in large-scale gossip networks.

ToDo: make a plot for next meeting

@devos50
Copy link
Contributor

devos50 commented Nov 6, 2024

Some feedback on the above stuff (also sent in a private message on Slack).

So the idea is to sample models from random nodes in the network, take the model with the lowest loss (on your training data?), and nudge your model update towards that model? I believe in essence you're modifying the loss function and adding an additional nudging-factor term to it. There also seems to be no model aggregation going on.

I just talked with the other DL experts in the lab, and we analyzed the algorithm a bit. We understand the insight behind it, but we're not sure what you're trying to solve. Are you aiming towards personalization? Or Byzantine robustness? Each of these objectives has different needs, and I also wonder how this would work in non-IID settings where the optimization trajectory is not as smooth as in an IID setting. The idea of evaluating different neighboring models is closely related to another paper we recently worked on. Nevertheless, it's difficult for us to make predictions on whether it works or not - it depends on many factors. Also, for now I assume it's a synchronous algorithm. Asynchrony and stale updates is going to mess significantly with your loss function.

It would be interesting to theoretically analyse if it converges (remark: this is a must have to have a chance in ML/system conferences nowadays!!). My feeling says it does since the differences between local models are becoming smaller so doing an infinite amount of rounds should make sure all nodes end up with the same model? But I'm not really sure since the else statement can also trigger, meaning that models are diverging (different nodes doing different model updates lead to divergence between these models).

Sometimes, a good way to see if it works or not is to just implement it and see what happens. It's an approach we often do as we have a lot of random ideas on how to improve DL.

Line 2 seems to suggest that you're doing a training step per sample? Usually one takes a mini-batch to ensure more accurate gradient estimations.

Small note, in line 7, you can replace L(y, f_local(x)) with L_local (computed in line 3). Also, the gradient computation step seems to be missing from the algorithm.

The gossip learning concept pre-dates the classic 2017 deployment paper by Google of federated learningREF. Gossip learning was conceptualised and deployed in 2013 REF1 REF2. This approach blends the paradigm of peer-to-peer with machine learning.

Small remark that the domain of distributed optimization has been actively researched since the early 2000s already. The main difference is that these works all focus on simple problems with a convex optimization landscape. Using deep neural networks in a P2P setting is a newer area of research indeed.

Update

I was thinking a bit more about your algorithm. On second thought, it more closely resembles Gossip Learning than D-PSGD, actually, because ultimately you are only "aggregating" a single model if it exhibits a lower loss (although it has elements of both algorithms). So, GL should be a baseline as well I guess. To fairly compare these algorithms, you might want to implement a synchronous version of GL (I assume your algorithm is also synchronous?) and measure the performance of all these algorithms in terms of rounds (and not absolute time).

Also, I was thinking that the extent of collaboration and learning from other nodes might be rather low, of course depending on many different parameters (sample size, nudging factor, etc.). This is good for Byzantine resilience, since (as you also pointed out) you only trust yourself. Nevertheless, this only brings you so far in terms of achieved accuracy. While I am confident that the accuracy of your approach compared to a no-collaboration scenario will be better (because models still influence each other), it might not outperform SOTA personalization methods that more aggressively/strategically integrate the knowledge of others. Given that you want to focus on personalization, these algorithms should also be a baseline. BTW, both DL and FL have this interesting trade-off between doing more local training and model aggregation. Your algorithm might be more on the side of local training while having an occasional nudge by the models of other nodes. Quantifying how much you learn from others would be a very interesting experiment to conduct.

Finally, your algorithm is heavy on communication costs since each node has to download k models (potentially large ones!) every round. Then, you only integrate the knowledge of at most a single received model while discarding the knowledge in all other models, which seems wasteful. It might very well be that the parameter updates in slightly sub-optimal models in the long run will lead to better performance of your own model since they help you to generalize knowledge. Also, note that your compute costs also increase since each node needs to do a forward pass on each received model each round. This is an important factor to keep in mind, so unless your paper is purely theoretical, this is something you might want to address.

Nevertheless, this is an interesting starting point to explore different dynamics!

@synctext
Copy link
Member Author

synctext commented Nov 6, 2024

Thnx Martijn!

Are you aiming towards personalization? Or Byzantine robustness?

The aim is more like 1963, permissionless communication (e.g. Internets). 😁 It offers permissionless intelligence (with my lower communication cost modifications in the comment above).
Would reviewers see that as novel? It is the type of stuff we can prototype. Obviously with a final deployment to our 50k Tribler community...

(Context: Global Brain issue)

@devos50
Copy link
Contributor

devos50 commented Nov 7, 2024

Decentralized learning in permissionless settings is certainly novel. DL is mostly researched in enterprise settings (hospitals, IoT, military, etc - also see this article) and the research we do also assumes such settings.

That said, Byzantine behaviour is a serious threat to convergence, especially in non-IID settings. Thus, most research in this area assumes a maximum fraction of Byzantine nodes, like 1/3, similar to research on consensus. In permissionless settings, such a threshold is not really applicable as potentially any node can send arbitrary model updates to other nodes. It also depends on the threat model one considers - passive attacks (e.g., honest-but-curious attackers) are much easier to deal with than active attacks such as the Sybil Attack.

In fact, the above algorithm that conservatively integrates model updates from others might work in permissionless settings. But in very adversarial settings, I wonder if it's worth collaborating and consuming the bandwidth for model exchanges in the first place. Perhaps training a model from scratch/fine-tuning a pre-trained model locally might yield sufficient utility for the envisioned use case.

@mg98
Copy link
Contributor

mg98 commented Nov 25, 2024

Update from my side regarding the LTR project:

I've been trying to get familiar with EPFL's decentralizepy framework. I think it's the right choice for this project. Moreover, I decided it will be a good investment, be it for the Trusted Decentralized Learning idea discussed here with Martijn or other future projects.

Unfortunately, doing that and trying to get allRank to work with decentralizepy and the WEB10K dataset ate up 2-3 weeks of my time 😩 This weekend, however, I finally succeeded.

What that means is that nodes can now train on WEB10K collaboratively and in a decentralized fashion, via dataset sharding and model-parameter gossip and aggregation, using the transformer-based LTR model and neuralNDCG loss function from the allRank framework. This setup allows me to run simulations of, e.g., byzantine attacks, enabling experiments for my Algorithm 1.
Further note: Clicklog gossip is not supported in decentralizepy. For that, we should resort back to pyipv8.

@synctext criticised the MSLR WEB10K/WEB30K datasets for not containing raw queries or documents. Indeed, it is standard for LTR datasets to come with nothing but relevance labels, query ids, and an esoteric query-document feature vector, since this is enough to benchmark LTR models.

People from Heidelberg University shared the sentiment in 2016:

The disadvantage of these datasets is the fact that they do not provide full texts but only pre-processed feature vectors.

While it's hard to find a dataset that does (perhaps for privacy reasons), the Heidelberg people did something clever. They scraped NutritionFacts.org, which contains articles, blog posts, Q&A threads, and videos. If one page links to another, the metadata of the referring page (blog post/video title, video description, keywords) is used as the query for the linked page. There is more to it, but this level of abstraction is enough here; a rough sketch of the construction follows below.
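
Roughly, the construction could look like this (field names and the page structure are completely made up; the actual scraper output differs):

```python
def build_ltr_triples(pages):
    """Turn intra-site links into (query, doc_id, relevance) training triples:
    the referring page's metadata serves as the query, the linked page as the
    relevant document."""
    triples = []
    by_url = {p["url"]: p for p in pages}
    for page in pages:
        query = " ".join([page["title"], *page.get("keywords", [])])
        for target_url in page["links"]:
            if target_url in by_url:
                triples.append((query, by_url[target_url]["id"], 1))
    return triples
```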

Anyhow, this leaves us with the only suitable dataset in existence (that I found) until we have more Tribler data, even though it will require some annoying post-processing from my side to work with our setup 😫.

I ran our crawler yesterday for <1 hour to see how much we can fetch. Only got 30 distinct queries.

@synctext
Copy link
Member Author

synctext commented Nov 25, 2024

  • Natural paper is to compare model parameter gossip versus data (clicklog) gossip
    • personalised: only need part of the huge dataset
    • the one relevant to your style,interest,history
    • Golden-First query: a new user that asks the network a query for the first time
      • Explore the network to find similar users
      • Use Golden query to find the most similar users (AOL4PS dataset has personalised search profiles)
      • Random overlay versus semantic overlay with most-similar users as neighbors ☮️
    • IPv8 + AllRank would scale to millions of users and run since Apr 2005 (start of Tribler)...
      • key cost of scaling is 1) bootstrap by finding most similar users 2) cost of cloning a trained model 3) user fine-tuning 4) total cost of supporting 1 additional user (e.g. scalable model of intelligence?)
    • Decentralisepy + AllRank random gossip of parameters. Converges to global average, no personalisation 🫢 😮
      • personalise, avoid 1 unified model that fails slightly for everybody
      • not random, but semantic
    • Compare the cost models of IPv8 and decentralisepy (back-of-envelope sketch after this list)
      • IPv8 clicklog gossip MByte usage of training data
      • decentralisepy MByte usage of model parameters
  • https://github.com/terrierteam/aolia-tools
  • ORCAS has a body, and Title for actual clicks 😮 https://ir-datasets.com/msmarco-document.html#msmarco-document/orcas
  • MSMARCO with a body, and text: https://ir-datasets.com/msmarco-passage.html#msmarco-passage/train
  • TripClick has both non-click (higher ranked) and clicked info, very unique https://ir-datasets.com/tripclick.html#tripclick/train/hofstaetter-triples
  • Our trusty old Middleware is suddenly leaning into Machine Learning 😲 We have been at that conference for decades, really, and with the 2025 edition the industry track has ML/AI in 6 cfp sub-topics
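
Back-of-envelope for the IPv8 vs decentralisepy cost bullet above (every number is a placeholder assumption; only the shape of the comparison matters):

```python
# Model-parameter gossip (decentralisepy style): each round ships full weights.
params = 5_000_000            # assumed model size
bytes_per_param = 4           # float32
rounds = 100
neighbours = 10
model_gossip_mb = params * bytes_per_param * rounds * neighbours / 1e6

# Clicklog gossip (IPv8 style): ship the raw training data instead of weights.
clicklog_entries = 100_000    # assumed clicks collected over the same period
bytes_per_entry = 200         # query string + infohash + timestamp, roughly
clicklog_mb = clicklog_entries * bytes_per_entry / 1e6

print(f"model gossip ~{model_gossip_mb:,.0f} MB vs clicklog gossip ~{clicklog_mb:,.0f} MB")
```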

2 week sprint goal: performance graph from IPv8+Allrank please, explore datasets (simple protocol, keep 20 neighbours, most similar ever seen)
