Using iphere AI Trademark Search: KIPRIS Queries and Similarity Analysis

iphere editorial · 5/10/2026
Before filing a new trademark, the first wall is figuring out whether the mark can register with KIPO. Answering that means searching prior marks on systems like KIPRIS and comparing them — but the search-query craft and similarity judgment are squarely attorney work, hard for non-specialists to handle directly.

iphere's AI trademark search breaks the workflow into two stages. Stage one automatically generates the KIPRIS query and pulls a candidate list; stage two runs one-to-one phonetic and conceptual similarity analysis on the marks you flag. It is built to support attorney judgment, not replace it — and the practical value sits in how the results are read.

Stage 1 — AI KIPRIS Query Generation

You enter the mark (Korean, English, or both), the search text extracted from any image element, and the designated goods (NICE classes plus item names). The server calls Anthropic Claude Sonnet 4.6 with a structured prompt and returns more than a query: extracted keywords, removed words with reasons, phonetic/script expansions, the final query, and a short reasoning string so you can see why each move was made.

The model is steered by six markdown guides authored by attorneys, covering query rules, class context, weak-distinctiveness stopwords, Supreme Court precedents, phonetic rules, and attorney tricks. Edits to the guides are reflected in the next search within 60 seconds (cache TTL), with no migration or redeploy, so the system stays in sync with new precedents and class context as soon as the firm updates the files.
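The 60-second cache described above can be sketched as a simple TTL check around a file read. This is an illustrative model, not iphere's actual implementation; the function name `load_guide` and the module-level cache are assumptions.

```python
import time
from pathlib import Path

_CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60  # edits to a guide file show up within one minute

def load_guide(path: str) -> str:
    """Return the guide's markdown, re-reading the file once the TTL lapses."""
    now = time.monotonic()
    cached = _CACHE.get(path)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]  # still fresh: serve the cached copy
    text = Path(path).read_text(encoding="utf-8")
    _CACHE[path] = (now, text)
    return text
```

Within the TTL window the cached copy is served even if the file changed on disk; after 60 seconds the next call picks up the edit.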

KIPRIS Query Operators and Expansions

KIPRIS advanced search uses its own operator set, and the AI emits queries in that exact format. The core operators are OR (+), AND (*), and a one-character wildcard (?). Each keyword is auto-expanded into 'original + transliteration + phonetic variants + wildcard' form.

| Operator | Meaning | Example |
| --- | --- | --- |
| + | OR | Chanel+샤넬 |
| * | AND | (Star+스타)*(Bakery+베이커리) |
| ? | Wildcard for one character | 샤넬? matches 샤넬 plus one-character variants |

| Input | AI-generated query |
| --- | --- |
| Chanel (샤넬) | (샤넬+Chanel+Channel+샤넬?) |
| Star Bakery | (Star+스타+Sta?)*(Bakery+베이커리+Bake?) |
| Three or more keywords | Only the first two are used (KIPRIS limit) |
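The expansion pattern above can be sketched as a small query builder. This is a minimal illustration of the operator format, not the product's code: in the real system the variant lists come from the AI's phonetic/script expansion, so the hard-coded variants here are assumptions.

```python
def expand(keyword: str, variants: list[str]) -> str:
    """Join a keyword with its variants using the KIPRIS OR operator (+)."""
    return "(" + "+".join([keyword, *variants]) + ")"

def build_query(keywords: list[tuple[str, list[str]]]) -> str:
    """AND (*) the first two expanded keywords; KIPRIS caps a query at two groups."""
    groups = [expand(kw, vs) for kw, vs in keywords[:2]]
    return "*".join(groups)
```

For "Star Bakery Shop", `build_query` keeps only the first two keywords and emits `(Star+스타+Sta?)*(Bakery+베이커리+Bake?)`, matching the table above.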

Not Just a Query — Four Pieces of Output

The AI does not just hand back a query string. The UI shows extracted keyword chips, removed-word chips with reasons, the editable final query, and the result table. You can see why a low-distinctiveness word was dropped or why a transliteration was added, and override anything before re-running.

  • Extracted keywords — terms the AI considered worth searching
  • Removed words — class-specific stopwords or filler with the reason
  • Phonetic/script expansions — Korean and English variants per keyword
  • Final query plus reasoning — drop into KIPRIS as-is, with a short explanation
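The four pieces above amount to a structured payload rather than a bare string. A rough shape, with field names that are assumptions rather than iphere's actual schema:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    """Illustrative shape of the Stage 1 payload (field names are assumed)."""
    keywords: list[str]               # extracted keyword chips
    removed: dict[str, str]           # removed word -> stated reason
    expansions: dict[str, list[str]]  # keyword -> phonetic/script variants
    final_query: str                  # editable, KIPRIS-ready query string
    reasoning: str                    # short explanation of the moves made
```

Because the reason travels with each removed word, the UI can show it on the chip and let you override the call before re-running.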

Stage 2 — Phonetic and Conceptual Similarity Analysis

Once a result table is on the screen, tick checkboxes on up to three marks of concern and click 'Compare phonetic and conceptual'. The system fires a separate AI call per selection and returns a one-to-one comparison between your applied-for mark and each chosen mark, scoring sound and meaning similarity.

Result Structure

Each comparison returns a verdict (similar / not similar), a risk level (high / medium / low), a phonetic breakdown, a conceptual breakdown, an overall comment, cited precedents, and a recommendation. Each dimension can score independently, and the verdict combines them.

| Field | Value | Meaning |
| --- | --- | --- |
| Verdict | Similar or not similar | Final one-word call |
| Risk level | High / medium / low | Refusal-probability proxy |
| Phonetic | Syllable split + score | Initial, final, and total syllable comparison |
| Conceptual | Meaning comparison | Proper nouns, semantic content |
| Cited precedents | 1-2 cases | Real case numbers from the guide pool |
| Recommendation | Register for review or pass | Next-action hint |
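One way the independent dimensions could fold into a verdict and risk level is a max-over-scores rule. The thresholds below are illustrative assumptions, not the product's actual cut-offs:

```python
def combine(phonetic: float, conceptual: float) -> tuple[str, str]:
    """Fold independent 0-1 similarity scores into (verdict, risk).
    Thresholds are illustrative, not iphere's real decision rule."""
    top = max(phonetic, conceptual)  # one strongly similar dimension is enough
    verdict = "similar" if top >= 0.7 else "not similar"
    if top >= 0.85:
        risk = "high"
    elif top >= 0.5:
        risk = "medium"
    else:
        risk = "low"
    return verdict, risk
```

Taking the maximum reflects the practice point that a mark can be refused on sound alone or meaning alone; the two scores are not averaged away.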

How to Read Risk and Recommendation

Treat the AI output as a first-pass filter, not a decision. A 'high' result does not always mean abandoning the application, and 'low' is not a guarantee of safety. The risk and recommendation are most useful as a way to prioritise attorney review and to standardise the next step.

| Risk | Typical pattern | Suggested next step |
| --- | --- | --- |
| High | Same initial syllables + short total length + same class | Open as a review matter; explore design-around or coexistence talks |
| Medium | Partial syllable match + partial class overlap | Loop in the attorney for a judgment call |
| Low | Clear sound/meaning differences + different classes | Likely passable; final attorney check only |

Hallucination Defence — Precedent Verification

Fabricated case numbers ('hallucinations') are the worst failure mode in legal AI. iphere counters this by matching every cited case number against a whitelist drawn from the precedent guide (03-precedent-cases.md). Any citation the model invents that is not in the guide pool is stripped before the result is shown. Adding new precedents to the guide is the way to widen the citation base.
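The whitelist check reduces to extracting case numbers from the guide and dropping anything outside that pool. A minimal sketch; the case-number regex and function names are assumptions about the format, not iphere's code:

```python
import re

def load_whitelist(guide_md: str) -> set[str]:
    """Pull case numbers out of the precedent guide (pattern is illustrative:
    four-digit year + Hangul court-docket marker + serial, e.g. 2015후1690)."""
    return set(re.findall(r"\d{4}[가-힣]+\d+", guide_md))

def filter_citations(cited: list[str], whitelist: set[str]) -> list[str]:
    """Keep only case numbers present in the guide pool; invented ones are
    stripped before the result reaches the user."""
    return [case for case in cited if case in whitelist]
```

This also explains the trade-off noted in the FAQ: citation density is bounded by the guide pool, so widening it means adding precedents to the markdown file.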

Credit Cost

Each stage consumes user credits. Stage one (query plus result fetch) is one charge; stage two charges per selected mark. If your balance is insufficient, the call is blocked before charging, so partial deductions never happen.

| Step | Credits | Unit |
| --- | --- | --- |
| AI KIPRIS query + result fetch | 3,000 | per search |
| Phonetic + conceptual 1:1 compare | 5,000 | per selected mark |
| Three-mark comparison (example) | 15,000 | 5,000 × 3 |
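The "blocked before charging" guarantee amounts to a check-then-deduct step that never leaves a partial balance. A toy in-memory sketch under that assumption (the real system presumably enforces this in a database transaction):

```python
class InsufficientCredits(Exception):
    pass

def charge(balances: dict[str, int], user: str, cost: int) -> int:
    """Check-then-deduct in one step: an insufficient balance raises before
    any deduction, so a blocked call never partially charges."""
    have = balances.get(user, 0)
    if have < cost:
        raise InsufficientCredits(f"need {cost}, have {have}")
    balances[user] -= cost
    return balances[user]
```

With a 16,000-credit balance, one search (3,000) plus two compares (5,000 each) leaves 3,000; a third compare is refused outright and the balance stays at 3,000.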

Frequently Asked Questions

Q1. What if the AI query is too broad or too narrow?

Edit the query in the preview box and re-run. Add or drop keywords, trim phonetic expansions, remove the wildcard; anything goes. If you disagree with a stopword call, click the removed-word chip, which shows the AI's stated reason, to bring the word back into the query.

Q2. The verdict is 'not similar' — can I file with full confidence?

The output supports the decision; it does not make it. Visual analysis is excluded, and there are refusal grounds beyond phonetic/conceptual similarity (e.g., dilution, or classes outside the comparison set). A 'not similar' verdict should be combined with attorney review of visuals, real-world use, and class coverage before signing off.

Q3. Why are there so few cited precedents?

Hallucination control. Citations are filtered against the precedent pool in the guide MD. New precedents added to the guide flow into the next analysis. If the citation density feels low, expanding the guide pool is the most direct way to improve it.

Q4. Will the same mark return identical results every time?

Generation calls an LLM, so identical inputs can produce minor variations, and KIPRIS data updates also shift the underlying results. Treat the tool as a judgment aid rather than a reproducible record. When a record is needed, capture the query and result snapshot at the time of search or save the analysis to the matter.


Try iphere AI trademark search now

From query generation to one-to-one phonetic and conceptual analysis — the workflow saves directly into the matter for attorney review.