🚧 This prototype is actively under construction. 🚧   Please provide feedback at github/danielsgriffin/SearchRights/issues or via email to daniel.griffin@berkeley.edu.




Example searches

Help users practice new ways of imagining and formulating queries.

Research suggests that effective prompting of LLMs can be challenging (Zamfirescu-Pereira et al., 2023) and that query formulation is complicated even in mainstream web search engines (Tripodi, 2018). Mollick (2023), writing about LLMs generally rather than search systems specifically, has called for “large-scale public libraries of prompts”.

Are there default example searches?

Comment: It may be interesting to also consider whether these queries demonstrate an advantage of the generative search capabilities or not.
Do default example searches have contextually relevant explanation on sourcing?
Is there a "searchable repository of examples"?

The phrase "searchable repository of examples" comes from Zamfirescu-Pereira et al., 2023. They point to a "prompt book" for DALL·E. Other examples, also in the image generation domain, include Lexica (marketing itself as "The Stable Diffusion search engine"). See also the searchable LangChain Hub prompt repository for developers.

Example: The `#show-and-tell` channel in Metaphor's Discord server: 'Show off what you've found with Metaphor!'

Sharing searching

Support users in sharing their search experience with others.

Research demonstrates significant value in users communicating about their experiences with tools. Better support for sharing interactions with these systems may improve users' collective ability to effectively question/complain, teach, & organize about/around/against these tools (i.e. “working around platform errors and limitations” & “virtual assembly work” (Burrell et al., 2019), “repairing searching” (Griffin, 2022), search quality complaints (Griffin & Lurie, 2022), and end-user audits (Metaxa et al., 2021; Lam et al., 2022)) and improve our “practical knowledge” of the systems (Cotter, 2022).

Can you share a link to the results or conversation?

Is there a share interface? For example, this is supported on OpenAI's ChatGPT and Google's Bard. It is not supported on Anthropic's Claude.
Is there an explanation of how the share interface works provided or linked to in the interface itself?


(1) The share interface in OpenAI's ChatGPT has a "More Info" link to their FAQ page: ChatGPT Shared Links FAQ:
OpenAI's ChatGPT share interface showing a more info link at the bottom.

(2) The share interface in Google's Bard has a "Learn more" link to a Bard Help page: Share your Bard chats: Bard share chat interface, showing a learn more link at the bottom.
Do shared links update if the sharer's interaction with the system continues?

Example: Here is the interaction in ChatGPT, per ChatGPT Shared Links FAQ:
If I continue the conversation after I create a shared link, will the rest of my conversation appear in the shared link?

No. Think of a shared link as a snapshot of a conversation up to the point at which you generate the shared link.

(1) Links to searches reproduce the search from the URL.

(2) Metaphor does not have follow-on queries/questions.

Is there a social share card?

Quality of share cards may influence the engagement on social platforms.

Links to searches produce a social share card.
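One way to check for a share card is to look for Open Graph / Twitter card `<meta>` tags in the shared page's HTML. A minimal sketch with Python's standard library, run here on an illustrative snippet rather than a live page:

```python
from html.parser import HTMLParser

class ShareCardParser(HTMLParser):
    """Collect Open Graph / Twitter card <meta> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.card_tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("property") or attrs.get("name") or ""
        if key.startswith(("og:", "twitter:")):
            self.card_tags[key] = attrs.get("content", "")

# Illustrative HTML; a real check would fetch the share link's page source.
sample = """
<html><head>
  <meta property="og:title" content="What is an LLM?">
  <meta property="og:image" content="https://example.com/card.png">
  <meta name="twitter:card" content="summary_large_image">
</head><body></body></html>
"""

parser = ShareCardParser()
parser.feed(sample)
print(parser.card_tags)
```

Which tags are present (and the quality of the `og:image`) shape how the link renders when posted on social platforms.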

Are share links (or links to searches) indexed and searchable in search engines (or the system itself)?

Permitting share links to be indexed by search engines may allow searchers elsewhere to find and evaluate the quality of the system responses. Depending on the disclosure in the interface, this may pose a privacy risk to users. It may be viewed explicitly as a growth strategy by a search system.

Evaluation details: I am currently looking only at `site:` searches on Google.
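The `site:` check itself is just query-string construction. A small sketch, using Google's standard `q` parameter:

```python
from urllib.parse import urlencode

def google_site_query(site_url: str) -> str:
    """Build a Google search URL restricted to one site via the site: operator."""
    return "https://www.google.com/search?" + urlencode({"q": f"site:{site_url}"})

print(google_site_query("https://metaphor.systems/search"))
```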

See links to conversations about this on Twitter here.

You can find links to searches in Google: g[site:https://metaphor.systems/search].

Note: There are no 'conversations' on Metaphor.

Google result count: 6

Standards & Openness

Support interoperability and extensibility.

This includes checking if a platform supports searching by URL and if it offers an API.

Coming soon: Examine support for the OpenSearch protocol (MDN, GitHub, Wikipedia); record comments about open source; record contributions to & integrations from open-source projects.
Can you search by URL?

Searching by URL has long been a staple of web search. It allows people to link to (live) searches (to bookmark or share with others) and to integrate searching within simple scripts. It also supports auditing.
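As a sketch, a live-search link is just a URL with the query encoded into a parameter. The parameter name varies by engine (Bing's `q` is used here; other systems must be checked individually):

```python
from urllib.parse import quote_plus

def search_url(base: str, query: str) -> str:
    """Link directly to a live search. The 'q' parameter name is an
    assumption that must be verified per engine."""
    return f"{base}?q={quote_plus(query)}"

# A bookmarkable / shareable / scriptable link:
print(search_url("https://www.bing.com/search", "What is an LLM?"))
```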

Example: What is an LLM?

Is there an API for search?

Searching by API may better support evaluations of the search results and the development of modifications or extensions building on or with the search system.
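Programmatic access typically takes the shape of an authenticated HTTP request. The sketch below only assembles a request (it does not send one); the endpoint, header name, and payload fields are hypothetical placeholders, not any provider's documented API:

```python
import json
import urllib.request

def build_search_request(endpoint: str, api_key: str, query: str) -> urllib.request.Request:
    """Assemble (but do not send) an authenticated search API request.
    Endpoint, auth header, and payload fields are illustrative placeholders."""
    body = json.dumps({"query": query, "numResults": 10}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_search_request("https://api.example.com/search", "MY_KEY", "What is an LLM?")
print(req.method, req.full_url)
```

Evaluating an API would then mean scripting such requests at scale and comparing the returned results, which is what makes API access useful for audits and extensions.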

We do not currently evaluate the performance of the API or restrictions on access. You.com, for instance, says this:
If you are interested in being an early access partner please email api@you.com with your use case, background, and expected daily load.
Other search engines that provide some form of API include the Brave Search API, Google (through their Programmable Search Engine's Custom Search JSON API), Microsoft's Bing Web Search API, and the Yandex Search API.

You can also get programmatic or tool-based access to web search results from many third-party providers that scrape search results from search engines. For example, academic researchers have conducted research using data from SerpAPI (Zade et al., 2022) and Ahrefs (Williams & Carley, 2023).

Metaphor announced beta access to their search API on Apr 21, 2023. It is now generally available at platform.metaphor.systems.

Does the system provide open-source models?

Providing open source models can advance the general search landscape as well as help users better understand the capabilities and limitations of the search system.

Supporting feedback

Help users develop, submit, and share feedback.

Mechanisms to support feedback will shape user expectations, experiences, and future improvements of the search landscape. Some approaches may excessively conceal or otherwise control complaints rather than centering users.

Is there a clear contextually relevant feedback method for core search functionality?

This is distinct from having a Contact or Feedback link buried off-page or in a footer.
Do users receive a copy of feedback?
Not yet evaluated
Is the feedback record easily shared by the user with a web-accessible link?
Not yet evaluated
Can users opt-in to contribute feedback to a commons-based infrastructure?

Sharing feedback from users in a transparent and privacy-preserving manner may improve the larger search landscape. Something like commons-based infrastructures, interfaces, or record repositories already exist for other factors, like IndexNow, Lumen, the ClaimReview schema, robots.txt, the OpenSearch protocol, research datasets and benchmarks, etc.

These are resources that search systems may contribute to, respect, or draw on.
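As one concrete instance, robots.txt is a machine-readable commons signal that search systems can respect, and Python's standard library can check it. An inline example rather than a live fetch:

```python
from urllib.robotparser import RobotFileParser

# Parse an example robots.txt and check which URLs a crawler may fetch.
rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /private/
Allow: /
""".splitlines())

print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/public/page"))   # True
```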
Research shows that search systems already rely heavily on Wikipedia (see especially McMahon et al. (2017)).

Note: The proximate cause of adding this criterion was the responses from the CEOs of Perplexity AI and You.com to an Oct 15, 2023 comment from Yann LeCun (Chief AI Scientist at Meta):
Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style.

It is the only way for LLMs to become the repository of all human knowledge and cultures.

Who wants to build the platform for this?