Developed by Mainak Ghosh, Sebastian Erhardt, Michael E. Rose, Erik Buunk, and Dietmar Harhoff, PaECTER (Patent-Level Representation Learning Using Citation-Informed Transformers) is a transformer-based language model fine-tuned on patent citation data. It is designed to address the specific challenges of patent text analysis and markedly improves the identification and categorization of similar patents, making it valuable for both patent examiners and innovation researchers.
The new NBER working paper “Patent Text and Long-Run Innovation Dynamics: The Critical Role of Model Selection” rigorously compares PaECTER with other Natural Language Processing (NLP) models. Its authors, Ina Ganguli (University of Massachusetts Amherst), Jeffrey Lin (Federal Reserve Bank of Philadelphia), Vitaly Meursault (Federal Reserve Bank of Philadelphia), and Nicholas Reynolds (University of Essex), assessed the models’ performance on patent interference cases, in which multiple inventors claim essentially the same invention.
The study concluded that PaECTER significantly reduces false positives and improves efficiency compared with traditional models such as TF-IDF (Term Frequency–Inverse Document Frequency). It also benchmarked PaECTER against more recent models such as GTE (General Text Embeddings) and S-BERT (Sentence-BERT), which likewise represent texts as numerical vectors that capture the semantic content of words and whole sentences. While PaECTER performed exceptionally well on expert-driven tasks such as interference identification, it also held its own in broader patent classification tasks, underscoring its versatility.
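To illustrate the contrast the study draws, here is a minimal sketch of why term-overlap methods like TF-IDF can miss paraphrased inventions. The two claim texts are invented for illustration; the TF-IDF part uses scikit-learn’s standard API.

```python
# Minimal sketch: why term-overlap methods like TF-IDF struggle with
# paraphrases. The two claim texts below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Two hypothetical patent claims describing the same invention in different words.
claim_a = "A rotor blade with a serrated trailing edge to reduce noise."
claim_b = "An airfoil whose rear rim is saw-toothed for quieter operation."

# TF-IDF scores documents by shared vocabulary, so these paraphrases
# share almost no terms and receive a near-zero similarity.
vectors = TfidfVectorizer().fit_transform([claim_a, claim_b])
print("TF-IDF cosine similarity:", cosine_similarity(vectors[0], vectors[1])[0, 0])

# Embedding models such as PaECTER, GTE, or S-BERT instead map each text to a
# dense vector, so semantically similar texts end up close in vector space
# even without word overlap (see the PaECTER example further below).
```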
“We are pleased that PaECTER’s performance has been validated by the NBER study, which shows its strengths in patent similarity analysis and confirms its role as a reliable tool for those working in the field of innovation and intellectual property,” says Mainak Ghosh, one of PaECTER’s developers. “This independent validation further strengthens its relevance in the field of patent examination.”
The PaECTER model is available for use on the Hugging Face platform, making it accessible to researchers, policymakers, and patent professionals worldwide. Its robust performance, as demonstrated by the NBER study, underscores its value in improving the way patent data is processed, contributing to more accurate and efficient analysis of patent innovations over time. As of today, PaECTER has been downloaded more than 1.4 million times.
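For readers who want to try the model, the following is a minimal sketch of querying PaECTER through the sentence-transformers library. The model identifier "mpi-inno-comp/paecter" and the example abstracts are our assumptions for illustration; check the model card on Hugging Face for the authoritative usage instructions.

```python
# Minimal sketch of computing patent similarity with PaECTER via the
# sentence-transformers library. The model ID below is assumed to be
# the Hugging Face identifier; confirm it against the model card.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mpi-inno-comp/paecter")  # assumed model ID

# Two invented patent abstracts, phrased differently but describing similar ideas.
abstracts = [
    "A rotor blade with a serrated trailing edge to reduce noise.",
    "An airfoil whose rear rim is saw-toothed for quieter operation.",
]

# Encode each abstract into a dense vector, then compare the vectors.
embeddings = model.encode(abstracts, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"PaECTER cosine similarity: {score.item():.3f}")
```

Unlike the TF-IDF comparison above, the embedding similarity reflects the shared meaning of the two texts rather than their shared vocabulary.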
More information:
PaECTER on Hugging Face
Ganguli, Ina; Lin, Jeffrey; Meursault, Vitaly; Reynolds, Nicholas F. (2024). Patent Text and Long-Run Innovation Dynamics: The Critical Role of Model Selection (No. w32934). National Bureau of Economic Research. Available at https://www.nber.org/papers/w32934
Ghosh, Mainak; Erhardt, Sebastian; Rose, Michael; Buunk, Erik; Harhoff, Dietmar (2024). PaECTER: Patent-Level Representation Learning Using Citation-Informed Transformers, arXiv preprint 2402.19411. Available at https://arxiv.org/abs/2402.19411