I’ve published a design consultation for the computer-aided tagging tool. Please look over the page and participate on the talk page. If you haven’t read over the project page, it might be helpful to do so first. The tool will hopefully be ready by the end of this month (October 2019), so timely feedback is important.
This is extremely interesting technology, and it is in continuous development. I know this tool is mostly effective on paintings (because of Google Arts & Culture), but I have also tried the API on this image. For “depicts (P180)” it gives a high score (above 70%) to tie, man, and person. Is that the data this tool is adding?
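For anyone curious how that 70% cutoff might work in practice, here is a minimal sketch of filtering labels by confidence score. The function name and the (label, score) data format are hypothetical illustrations, not the tool's actual API:

```python
def high_confidence_labels(labels, threshold=0.70):
    """Return label names whose confidence score exceeds the threshold."""
    return [name for name, score in labels if score > threshold]

# Illustrative scores, loosely matching the results described above
results = [("tie", 0.92), ("man", 0.88), ("person", 0.85), ("suit", 0.55)]
print(high_confidence_labels(results))  # ['tie', 'man', 'person']
```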
It also recognises 4 of the 5 persons in the image, which is impressive (George H. W. Bush, George W. Bush, Bill Clinton, and Jimmy Carter, for those who are wondering).
Another question: can we “feed it” with already-set depicts statements? If that is possible, it will most likely work even better.
Thanks for the questions.
Yes, that is the data the tool is working with: the depicts statements.
We will not be able to train it that way. It should learn over time from what we add, which will be helpful (and the additions will be available as public data dumps for others to train models).