Patenting can be a long and costly process, and the uncertainty of the outcome can greatly impede the patentee’s ability to budget and plan effectively. In an ideal world, a practitioner (and by extension, their client) would know which art unit will be assigned to examine a case prior to the completion of drafting. Knowing that the outcome of prosecution is more likely to be negative, costly, and/or narrow could make all the difference for applications likely to fall in a small set of truly challenging art units. With this knowledge, the client and practitioner may revise the application to better the odds of a more favorable outcome.
The good news is that machine learning and analytics have matured to the point where one can “look into the future” and estimate the probabilities of different outcomes for a patent application. Access to such information and predictions is now available, for a price. However, not all output is equal. Given the same raw data, the technology applied to analyze that data can yield different results.
For example, let’s say you are the practitioner drafting an application for a combined search and payment system, and you have a first claim that starts as follows:
A transaction method, comprising: managing, by an online mall server, physical store information; receiving login by a user terminal; searching, according to a selected list of commodities submitted by the user terminal, for information on physical stores having a commodity in the list; ...
You submit the draft application that includes this claim to a machine system (in this example, TurboPatent®) that compares it to stored linguistic patterns present in the recent history of every art unit and returns the five best possible matches (best fit first):
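To make the idea concrete, here is a minimal sketch of how a linguistic-pattern match of this kind could work. This is an illustrative assumption, not TurboPatent's actual method: it uses a toy, invented corpus of per-art-unit claim language (`ART_UNIT_TEXT` is hypothetical), builds TF-IDF vectors, and ranks art units by cosine similarity to the draft claim.

```python
import math
from collections import Counter

# Hypothetical, toy corpus: a few words of representative claim language per
# art unit. A real system would train on the full text history of every art
# unit; these entries and numbers are invented for illustration only.
ART_UNIT_TEXT = {
    "3684": "online mall server physical store commodity transaction search payment",
    "2122": "neural network model training inference classifier",
    "3715": "game apparatus education device player score",
}

def tf_idf_vectors(docs):
    """Build TF-IDF vectors for a dict of {label: whitespace-delimited text}."""
    tokenized = {label: text.split() for label, text in docs.items()}
    doc_freq = Counter()
    for tokens in tokenized.values():
        doc_freq.update(set(tokens))
    n_docs = len(docs)
    vectors = {}
    for label, tokens in tokenized.items():
        term_freq = Counter(tokens)
        vectors[label] = {
            term: (count / len(tokens)) * math.log((1 + n_docs) / (1 + doc_freq[term]))
            for term, count in term_freq.items()
        }
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term->weight vectors."""
    dot = sum(weight * b.get(term, 0.0) for term, weight in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_art_units(claim_text, top_n=5):
    """Return up to top_n art units, best linguistic match first."""
    docs = dict(ART_UNIT_TEXT)
    docs["_query"] = claim_text
    vectors = tf_idf_vectors(docs)
    query = vectors.pop("_query")
    scored = sorted(((cosine(query, v), au) for au, v in vectors.items()),
                    reverse=True)
    return [au for _, au in scored[:top_n]]

claim = ("transaction method online mall server physical store "
         "commodity list search user terminal")
print(rank_art_units(claim))  # best fit first; "3684" ranks on top here
```

The design point is that the ranking is driven entirely by which stored corpus the claim text is compared against, which is one reason different vendors' systems can return different top-5 lists for the same input.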
The accuracy of this analysis can be evaluated by matching these predictions to known outcomes. For example, the results above are based on a published application, US20160005102A1, for which one can validate in PAIR that 3684 is indeed the art unit examining the case, which in recent months has had an allowance rate of 33%. Had this predictive analysis been executed in advance of filing, the practitioner and client would have been forewarned, by an objective, data-driven process, that they could likely expect substantial headwinds in prosecution. They might have decided to amend the wording and other features of the application before filing, to improve the odds of assigning the application to a less challenging art unit.
The results differ when the same application is entered into a different machine system (e.g., Pathways from LexisNexis):
The analysis of the same input predicted neither the art unit to which the application was ultimately assigned for examination nor even a neighboring art unit in the correct group. In a recent test on approximately 700 randomly selected granted patents, the TurboPatent machine system predicted the correct art unit assignment 50% more often than the Pathways system.
Accurately predicting an art unit requires cutting-edge technologies such as natural language processing, machine learning, specialized algorithms, and big data analytics. Even then, predictions can vary widely based on the parameters used in the analysis. For example, Juristat recently posted a small sample of its allowance rate statistics to IP Watchdog. In our analysis, the Juristat rates are based on data collected and averaged over a long period of time, possibly up to 10 years. One might assume a larger data set would yield more accurate predictions of the future, but this is only true if the additional data are representative of current trends. The USPTO is continuously changing its processes and responding to developments in the Federal Circuit, so data averaged over a long span may be less relevant to the IP you're filing now. In the case of the Alice Corp. decision, the change was abrupt and led to major swings in the allowance rates of many art units. Ideally, prediction algorithms should factor in the time period over which data have been averaged, and the developments that can directly affect the relevance of those data.
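The effect of averaging over too long a window can be shown with a small sketch. The monthly figures below are invented purely for illustration (they are not real USPTO data): an art unit whose allowance rate drops abruptly after a mid-2014 policy shift looks very different depending on whether you average over the whole history or over a recent window.

```python
# Hypothetical monthly allowance counts for one art unit, before and after an
# abrupt shift in examination practice. All numbers are invented for
# illustration; real input would come from USPTO disposition records.
monthly = [
    # (year_month, allowed, total_dispositions)
    ("2013-06", 60, 100), ("2013-12", 62, 100),
    ("2014-06", 58, 100),  # Alice Corp. was decided in June 2014
    ("2014-12", 20, 100), ("2015-06", 18, 100), ("2015-12", 22, 100),
]

def rate(rows):
    """Aggregate allowance rate (percent) over a list of monthly rows."""
    allowed = sum(a for _, a, _ in rows)
    disposed = sum(d for _, _, d in rows)
    return 100 * allowed / disposed

print(f"all-time average: {rate(monthly):.0f}%")       # blends both regimes: 40%
print(f"recent window:    {rate(monthly[-3:]):.0f}%")  # reflects current practice: 20%
```

The all-time average of 40% overstates an applicant's current odds by a factor of two in this toy example, which is the argument for restricting the averaging window to a recent, representative period.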
As a comparison, the TurboPatent machine system utilizes allowance rates for nearly all art units based on a more focused timespan—January 1, 2015 through April 3, 2016. Figure 1 is a histogram of art units by allowance rate, divided into 1% wide bins. Art units with fewer than 100 clear dispositions were excluded.
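The binning rule just described (1%-wide bins, art units with fewer than 100 clear dispositions excluded) can be sketched as follows. The per-art-unit counts here are invented for illustration, not the actual figures behind Figure 1:

```python
# Illustrative sketch of the histogram binning described above.
# Real input would be per-art-unit counts of allowances and total clear
# dispositions over the chosen window; these numbers are made up.
dispositions = {
    # art_unit: (allowances, total_clear_dispositions)
    "3689": (6, 200),
    "3684": (66, 200),
    "1764": (450, 600),
    "2122": (790, 1000),
    "2831": (40, 50),   # fewer than 100 clear dispositions: excluded
}

MIN_DISPOSITIONS = 100
BIN_WIDTH = 1  # percent

bins = {}
for art_unit, (allowed, total) in dispositions.items():
    if total < MIN_DISPOSITIONS:
        continue  # too few clear dispositions for a stable rate
    rate_pct = round(100 * allowed / total)
    bin_floor = (rate_pct // BIN_WIDTH) * BIN_WIDTH
    bins.setdefault(bin_floor, []).append(art_unit)

for bin_floor in sorted(bins):
    print(f"{bin_floor:3d}%: {bins[bin_floor]}")
```

Changing `BIN_WIDTH` to 4 reproduces the coarser bins used for the long-tail breakout in Figure 2.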
The data lie in a distribution skewed toward allowance rates over 50%. Indeed, 93% of art units have an allowance rate over 50%, with a broad plateau in the 70s and 80s. The art unit with the median allowance rate is 2122, in the group "Miscellaneous Computer Applications," with a rate of 79%.
Of greater interest is the long tail in the opposite direction, the 7% of art units with allowance rates below 50%. Figure 2 is a histogram breakout of the long tail of art units with low allowance rates, divided into bins 4% wide. The extreme outlier, at a bare 3% of cases allowed, is Art Unit 3689, in one of the three groups of art units devoted to business methods. The tail is dominated (23 out of 36) by the three groups of art units in the ranges 362X, 368X, and 369X, which is consistent with expectations following from the subject matter headings of those groups: "Electronic Commerce," "Business Methods," and "Business Methods - Finance," respectively. (The USPTO maintains a guide to the subject matter area of each group of art units here.) The remainder are divided between Tech Centers 1600 and 1700, with the exception of two oddballs: 3715, "Amusement and Education Devices," and 2489, "Recording; Compression."
Standards of patentability, eligibility, and examination practice are ever evolving. Given the major cases of recent history (Bilski, Myriad, Mayo, Nautilus, and Alice Corp.) and their respective impacts, we know the only constant is change. With this in mind, TurboPatent will periodically make available up-to-date versions of the data published today.