EP0617381A2 - Character recognition - Google Patents

Character recognition

Info

Publication number
EP0617381A2
Authority
EP
European Patent Office
Prior art keywords
character
vector
features
topological
extracting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP94830118A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP0617381B1 (en)
EP0617381A3 (en)
Inventor
Girolamo Gallo
Flavio Lucentini
Cristina Lattaro
Giulio Marotta
Giuseppe Savarese
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Texas Instruments Inc
Original Assignee
Texas Instruments Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Texas Instruments Inc filed Critical Texas Instruments Inc
Publication of EP0617381A2 publication Critical patent/EP0617381A2/en
Publication of EP0617381A3 publication Critical patent/EP0617381A3/en
Application granted granted Critical
Publication of EP0617381B1 publication Critical patent/EP0617381B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/18 - Extraction of features or characteristics of the image
    • G06V30/184 - Extraction of features or characteristics of the image by analysing segments intersecting the pattern
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/19 - Recognition using electronic means
    • G06V30/19007 - Matching; Proximity measures
    • G06V30/19013 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Definitions

  • This invention relates to a method and apparatus for character recognition, particularly the recognition of letters, numerals and other symbols such as Japanese characters, musical symbols, simple drawings and the like.
  • instructions or data can be input by the user of the system by means of a pen-like stylus on an input/output screen.
  • An example of such a screen is a touch sensitive monitor.
  • the input instructions or data can be handwritten script, numbers, or any other form of handwritten character.
  • This system is relatively complex to implement, requiring dedicated hardware for the detailed stroke analysis which must be conducted. In addition, it relies on a personalized data base for each user, which can mean that a large amount of memory is wasted if a given system is used by many users. If no personalized data base is set up, the recognition rate of the system can fall to an unacceptable level. Since this system relies only on stroke analysis, it is important for all users to write the characters in a predetermined conventional manner. Recognition is severely inhibited when the user adopts a writing style in which the strokes differ from the predetermined conventional manner: even though the free-style characters may look very similar to the conventional characters, recognition may not be accurate, since the stroke analysis indicates a different character from that which has been written.
  • One object of the present invention is to provide a character recognition system which overcomes at least some of the disadvantages of known systems.
  • Another object of the present invention is to provide a simple character recognition system which has an improved recognition rate even when no personalized data base is utilized.
  • a method for recognising a script written character comprising the steps of entering the character using character enter means; digitizing the character; storing the digitized character; extracting topological features and vector features of said character; comparing the topological and vector features of the character with a set of reference topological and vector features stored in a memory, each of the set corresponding with a specific character; and performing a logic process to determine which of the set of reference features most closely corresponds to the topological and vector features of the digitized character, thereby recognising the script written character.
  • the system of the invention has the advantages that there is no need for time-consuming, complex analysis of the character and that a data base will provide recognition of upwards of 98% of all characters input by a user.
  • a character recognition system is shown generally at 10.
  • the system comprises a micro-computer 12 having a monitor 14.
  • the computer is connected to a touch sensitive screen, for example an LCD graphic digitizer 16. It is possible to replace the monitor with a touch sensitive monitor and dispense with the separate touch sensitive screen.
  • a pen-like stylus 18 may be used on the touch sensitive screen to input instructions, data or the like to the computer.
  • a user can write commands directly onto the screen. For the most accurate recognition of the commands, the characters should preferably be entered in the frame shown generally at 20.
  • the touch sensitive screen menu used by the user to enter information is shown in Figure 2.
  • the main options available are shown in blocks 22 and may be activated by touching the stylus on the appropriate box 24a-f.
  • the frame 20 is shown in greater detail in Figure 3. It comprises a top frame 26, a middle frame 28 and a bottom frame 30.
  • Each character should be entered into one column 32. The position of the character is used to aid in the recognition of that character.
  • "short" lower case letters such as a, c, e, m, n, o, r, s, u, v, w, x and z should be entered in the middle frame and should preferably be of such a size that they do not extend into the other frames.
  • Taller lower case letters such as b, d, f, h, i, k, l, and t should have a larger vertical size than the smaller lower case letters and should be entered in the top and middle frames.
  • Lower case letters with "tails" such as g, j, p, q and y should preferably be entered such that the tails extend into the bottom frame.
  • Upper case letters and numbers should generally be entered in the top and middle frames of any given column 32.
  • Other characters such as punctuation for example should be entered in their normal positions relative to the alphanumeric characters. The recognition of punctuation is not discussed in this document but is set out in our co-pending application (TI-17480).
  • Adhering to the frame constraints illustrated above allows the system to recognize the difference between "c" and "C", for example. It is possible for the operator to set the size of the frames or to use the standard frame. In addition, for those alphabets, such as Japanese characters, which do not have symbols of the same shape but different sizes, frame constraints are not necessary and the user can enter the characters in a box without any frame constraints. The same holds if the system is used for geometrical symbols or drawing recognition. The full detail of how the frame constraints aid recognition will be described in more detail below.
  • the output of the digitizer is downloaded to the computer into what is called a stroke format file (STK file).
  • the character is described by a sequence of strokes each of which is made up of a series of coordinates between a pen-down and pen-up condition.
  • Figure 4 shows an ASCII format translation of the content of a stroke file in binary form of the letter A.
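  • As an illustration, the sketch below shows one way such a stroke description might be held in memory; the class and field names are assumptions made for the purpose of the example, since the patent only shows the ASCII translation of the binary file (Figure 4).

```python
# Minimal sketch of an STK-style stroke description. The class and field names
# are illustrative assumptions; the patent only shows an ASCII translation of
# the binary stroke file (Figure 4).
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[int, int]              # (x, y) digitizer co-ordinates

@dataclass
class Stroke:
    points: List[Point]              # samples recorded between pen-down and pen-up

@dataclass
class StkCharacter:
    strokes: List[Stroke]

    @property
    def num_strokes(self) -> int:
        return len(self.strokes)

# Example: a crude two-stroke "A" (the co-ordinate values are illustrative only).
letter_a = StkCharacter(strokes=[
    Stroke(points=[(10, 90), (50, 10), (90, 90)]),   # the two slanted sides
    Stroke(points=[(30, 55), (70, 55)]),             # the horizontal bar
])
print(letter_a.num_strokes)          # -> 2
```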
  • the output from the digitizer is then "pre-processed" before character recognition is implemented. This pre-processing comprises character scaling, centering, digital noise filtering and fitting the digital information into a pixel matrix of a given size, for example, a 16x16 matrix.
  • a recognition process which comprises three basic steps: extraction of the topological characteristics (features) of the input handwritten character; character vector code determination; and recognition by 'fuzzy' comparison with a set of reference characters stored in a memory.
  • the handwriting recognition is performed on each individual character once it has been written. This allows for total freedom in writing the characters according to the user's style, i.e. it is not necessary to enter the character as a particular sequence of strokes. This can allow for variations in the direction of movement of the pen and corrections can be effected after the characters have been originally written.
  • the recognition is mainly based on the symbol's optical characteristics. To solve ambiguities due to the optical aspect of the characters, such as "S" and "5", a vector code is determined for each character. This vector code determination will be described in more detail below. As previously stated, the recognition of uppercase and lowercase letters with the same shape (e.g. 'c' and 'C') is improved by the user writing in fields or frames.
  • This particular system is optimized for characters as well as all other signs usually found on a standard computer keyboard.
  • the system has been optimised for all alphabets, Japanese characters, geometrical symbols and simple drawings.
  • the principle is valid for all other alphabets.
  • the system can be used in a writer-dependent mode, in which it shows a better recognition rate.
  • very good recognition is possible when it is used as a writer-independent system, i.e. without training the system.
  • the user can enter his own reference character set through a user friendly character mode. Once adequately trained, the system can accept any handwriting style. For example, one user may train the system to recognize multiple character styles for a given alphabet letter, or his special way of writing a given character. Alternatively, the system can be used in "untrained mode" with a standard set of characters.
  • the pre-processing initially converts the STK format described above to a pattern (PTN) format of a normalized 16x16 matrix. This PTN format is shown in Figure 5. The remainder of the character pre-processing is carried out as follows:
  • Point interpolation/smoothing: because the digitiser converts points at a constant rate, the distance between two consecutive points is proportional to the pen speed. If the writing speed is low, some points can be very close to each other, resulting in digitising noise. Alternatively, if the writing speed is relatively fast, points can be well spaced from each other, leaving holes in the stroke. To overcome this drawback, an interpolation/smoothing routine is used to add interpolated points whenever points are too far apart and to remove points which are too close together;
  • Character boundaries are evaluated and the points are scaled to fit into a normalized 16x16 matrix.
  • X and Y components are scaled with different scaling factors to completely fill the matrix. If the ratio of either X-size/Y-size or Y-size/X-size is larger than a given threshold value (typically 4) only the larger component (X or Y) of the character is scaled. This is to avoid improper expansion of 'slim' characters such as "I" or "-"; and
  • Character mapping, in which the number of points belonging to each of the 16x16 pixels is counted and their average value is computed.
  • a threshold value (about 1/8 of the average value) is used to establish if a given pixel of the matrix is to be black (1) or white (0). If the number of points in a given pixel is greater than the threshold, the corresponding bit is set to 1, otherwise it is set to 0.
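  • A condensed sketch of the scaling and mapping steps is given below. The 'slim character' handling and the 1/8-of-average threshold follow the description above, while the helper names, the equal-scale treatment of slim characters and the averaging over occupied cells are illustrative assumptions rather than the patented implementation.

```python
# Condensed sketch of the pre-processing stage: scaling to a 16x16 grid and
# thresholding at about 1/8 of the average per-pixel point count.
from typing import List, Tuple

Point = Tuple[float, float]

def scale_points(points: List[Point], size: int = 16,
                 slim_threshold: float = 4.0) -> List[Tuple[int, int]]:
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    x_min, y_min = min(xs), min(ys)
    x_span = max(max(xs) - x_min, 1e-6)
    y_span = max(max(ys) - y_min, 1e-6)
    # 'Slim' characters such as "I" or "-" keep a single scale factor so that
    # the narrow dimension is not improperly expanded (assumed equivalent rule).
    if x_span / y_span > slim_threshold:
        y_span = x_span
    elif y_span / x_span > slim_threshold:
        x_span = y_span
    cells = []
    for x, y in points:
        col = min(int((x - x_min) / x_span * size), size - 1)
        row = min(int((y - y_min) / y_span * size), size - 1)
        cells.append((row, col))
    return cells

def map_to_pattern(points: List[Point], size: int = 16) -> List[List[int]]:
    counts = [[0] * size for _ in range(size)]
    for row, col in scale_points(points, size):
        counts[row][col] += 1
    # Averaging over the occupied cells only is an assumption.
    occupied = [c for r in counts for c in r if c > 0]
    threshold = (sum(occupied) / len(occupied)) / 8.0   # ~1/8 of the average count
    return [[1 if c > threshold else 0 for c in row] for row in counts]
```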
  • Additional data are added to the character matrix: the character identifier (only available if the character is a member of the reference set), the number of strokes of the character and the frame position. This information will be used during the recognition process, as will be described below.
  • the frame position is coded by assigning a score to each of the three writing frames: 1 for the top frame 26, 2 for the central one 28, and 4 for the bottom one 30.
  • Frame numbers are evaluated by checking the character position on the writing field.
  • the active area of the top and bottom fields is actually slightly smaller than indicated by the frame lines. This means that the actual boundary between the central and top frames is slightly higher than shown and the boundary between the central and bottom frames is slightly lower than shown. The amount of these shifts may be adjusted by the user.
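  • Because the three scores 1, 2 and 4 behave like bit flags, one plausible reading (an assumption, since the text does not spell it out) is that the frame number of a character is the sum of the scores of the frames it occupies, as sketched below.

```python
# Hedged sketch of frame-position coding. The scores 1 (top), 2 (central) and
# 4 (bottom) come from the description; combining them as bit flags into a
# single frame number is an assumption.
TOP, CENTRAL, BOTTOM = 1, 2, 4

def frame_number(touches_top: bool, touches_central: bool, touches_bottom: bool) -> int:
    code = 0
    if touches_top:
        code |= TOP
    if touches_central:
        code |= CENTRAL
    if touches_bottom:
        code |= BOTTOM
    return code

print(frame_number(False, True, False))  # a "short" letter such as 'c' -> 2
print(frame_number(True, True, False))   # a capital or tall letter such as 'C' -> 3
print(frame_number(False, True, True))   # a letter with a tail such as 'g' -> 6
```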
  • the result of the character pre-processing stage is a 16x16 matrix in pattern format, each element of the matrix being a pixel of the character image. Allowed values for the given element are 0 (white pixel) and 1 (black pixel) as is shown in Figure 5.
  • a header data line containing the following information:
  • the header data line may also contain character vector codes.
  • the first step of the recognition process consists in extracting from the original 16x16 matrix and from the STK character format a topological and dynamic (vector) description or code of the characters. Recognition is then performed by comparing input character code with the codes of the reference characters collected by the user during the learning phase or from the standard set in the memory.
  • the code basically contains four types of information:
  • Feature extraction is the key step of the recognition algorithm. It produces a faithful description of the character's topology, which makes recognition very accurate. This operation is performed by using a 'topological feature set', consisting of the 99 16x16 matrices shown in Figure 6, which represent elementary geometrical structures. These matrices have been obtained through an extensive optimization process performed using several character sets from many different people.
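  • The 99 feature matrices of Figure 6 are not reproduced here, and this extract does not restate exactly how each feature is scored against the input matrix; the sketch below therefore uses a plain black-pixel overlap count purely as a stand-in for the matching rule.

```python
# Hedged sketch of topological feature extraction: one integer per feature matrix.
# A black-pixel overlap count is only a stand-in for the actual matching rule.
from typing import List

Matrix = List[List[int]]   # 16x16 pattern, 0 = white pixel, 1 = black pixel

def feature_scores(character: Matrix, feature_set: List[Matrix]) -> List[int]:
    scores = []
    for feature in feature_set:
        overlap = sum(
            character[r][c] & feature[r][c]
            for r in range(16)
            for c in range(16)
        )
        scores.append(overlap)
    return scores              # 99 values when feature_set holds the 99 matrices
```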
  • the next field of the character code contains the frame position, the number of strokes of the character and, if the character is a member of the learning set, its identifier, provided by the user.
  • the last field is the character vector code information. This includes the following vector parameters:
  • the DIN parameter, which describes the position of each point in the STK format with respect to the previous one, is determined in the following manner.
  • the number of DIN values for each character is Nd (number of points-1). Since the DIN values range between 1 and 10, a 4-bit value is needed to represent each of them.
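  • Only the value range (1 to 10) and the count Nd are stated above; the quantization below, which maps the direction from each point to the next onto eight sectors and reserves an extra code for coincident points, is purely an illustrative assumption.

```python
# Hedged sketch of the DIN code. The 1..10 range and Nd = number of points - 1
# come from the description; the direction-sector mapping is an assumption.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def din_code(points: List[Point]) -> List[int]:
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x1 == x0 and y1 == y0:
            codes.append(9)        # assumed code for "no movement"
        else:
            angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
            codes.append(int(angle // (math.pi / 4)) + 1)   # sectors 1..8
    return codes                   # Nd values, each representable in 4 bits
```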
  • the APPX and APPY parameters represent the position of each point in the STK character format with respect to the first, central and last points. As previously indicated, pen-up and pen-down co-ordinates are not considered. Assuming: (X,Y) are the co-ordinates of the current point; (Xo,Yo) are the co-ordinates of the first point; (Xc,Yc) are the co-ordinates of the central point; and (Xn,Yn) are the co-ordinates of the last point.
  • the number of APPX and APPY values for each character is Nd, as for the DIN parameters. Since the possible values range between 1 and 8, a 3-bit value is needed to store each APPX/APPY code, so 2*3*Nd bits are required.
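  • The 1-to-8 range and the 3-bit size suggest that each value encodes three binary comparisons against the first, central and last points; treating the three outcomes as bits, as in the sketch below, is an assumption rather than the patented definition.

```python
# Hedged sketch of the APPX code (APPY is computed the same way from the Y
# co-ordinates). The 1..8 range and the count Nd come from the description;
# packing three comparison outcomes into a 3-bit value is an assumption.
from typing import List

def appx_code(xs: List[float]) -> List[int]:
    x_first, x_centre, x_last = xs[0], xs[len(xs) // 2], xs[-1]
    codes = []
    for x in xs[1:]:                       # Nd values, as for the DIN code
        bits = (int(x > x_first) << 2) | (int(x > x_centre) << 1) | int(x > x_last)
        codes.append(bits + 1)             # shift into the stated 1..8 range
    return codes
```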
  • Xmax is the maximum value of the X-co-ordinate in the STK character format
  • Ymax is the maximum value of the Y-co-ordinate in the STK character format
  • Xmin is the minimum value of the X-co-ordinate in the STK character format
  • Ymin is the minimum value of the Y-co-ordinate in the STK character format
  • Y(Xmax) is the y-co-ordinate of the point to which Xmax belongs
  • X(Ymax) is the x-co-ordinate of the point to which Ymax belongs
  • Y(Xmin) is the y-co-ordinate of the point to which Xmin belongs
  • X(Ymin) is the x-co-ordinate of the point to which Ymin belongs.
  • the 4 extremal points are considered in the order in which they were written by the user and their co-ordinate values are compared.
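  • Since four extremal points give six point pairs, and the character code reserves 12 values for REL (see below), one plausible reading (an assumption) is that REL records an X and a Y comparison for each pair of extremal points, taken in writing order, as sketched here.

```python
# Hedged sketch of the REL code: 6 pairs of extremal points x 2 co-ordinate
# comparisons = 12 values. The pairing and the -1/0/+1 outcomes are assumptions.
from itertools import combinations
from typing import List, Tuple

Point = Tuple[float, float]

def rel_code(extremal_points: List[Point]) -> List[int]:
    # extremal_points: the points carrying Xmax, Ymax, Xmin and Ymin,
    # listed in the order in which they were written.
    def compare(a: float, b: float) -> int:
        return (a > b) - (a < b)           # -1, 0 or +1
    code = []
    for (xa, ya), (xb, yb) in combinations(extremal_points, 2):
        code.append(compare(xa, xb))
        code.append(compare(ya, yb))
    return code                            # 12 values for 4 extremal points
```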
  • the updated character code contains the following information:
  • a given character is represented by an array of integers: 99 from feature extraction, 32 from intersections, 1 for the stroke number, 1 for the frame position, 1 for Nd, Nd for DIN, 2Nd for APPX/APPY and 12 for REL.
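  • For concreteness, the length of that array follows directly from the breakdown above; the small helper below simply restates the arithmetic.

```python
# Length of the character code array: 99 feature values, 32 intersection values,
# stroke number, frame position, Nd, then DIN (Nd), APPX/APPY (2*Nd) and REL (12).
def code_length(nd: int) -> int:
    return 99 + 32 + 1 + 1 + 1 + nd + 2 * nd + 12   # = 146 + 3*Nd

print(code_length(40))   # e.g. a character with 41 points -> 266 integers
```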
  • Recognition is performed by comparing value-by-value the code of the unknown input character with the coded characters in the reference set having the same frame number. Characters with frame numbers different from that of the input character are disregarded during the comparison. This has two important advantages. First, it allows recognition of characters with the same shape and different size. Secondly, the recognition process is faster because it involves a smaller number of comparisons.
  • Wo is a weighting factor which describes the relative importance of the optical and vector information
  • Soj is the so-called optical or topological score
  • Sdj is the so-called dynamic score.
  • This score is evaluated for each of the Ncar reference characters.
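  • The exact way Wo combines the optical score Soj and the dynamic score Sdj is not restated in this extract; the weighted sum below is therefore only one plausible form, shown as an assumption.

```python
# Hedged sketch of the per-reference score: the names Wo, Soj and Sdj come from
# the description, but this particular weighted sum is an assumption.
def overall_score(s_optical: float, s_dynamic: float, w_o: float) -> float:
    # The larger Wo is, the more the optical (topological) score dominates.
    return w_o * s_optical + s_dynamic
```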
  • the Aj score is evaluated by a comparison of the DIN and APPX/APPY codes.
  • the parameters DIN, APPX, APPY and Nd refer to the input character, and DINj, APPXj, APPYj and Ndj refer to the j-th character in the reference set.
  • DIN, APPX and APPY are made up of Nd elements, while DINj, APPXj and APPYj contain Ndj elements. Since, in general, Nd ≠ Ndj and there is a high variability in handwriting, it is not sufficient to compare the DIN and APPX/APPY values as they are; multiple comparisons are required, each of which has to be performed after shifting the two arrays with respect to each other. Simulation results show that the best recognition rate is obtained by considering 7 different relative shifts. For instance, if:
  • APPX[i], that is, the i-th element of the APPX array, is compared with APPXj[i], and APPY[i] is compared with APPYj[i]. The comparison is not performed if any of these elements is not available. Then, 3 different cases may occur:
  • the REL code of the input character and the RELj code of the j-th character in the reference set are compared value-by-value.
  • the overall score Sj assigned to the comparison with the j-th character in the reference database is evaluated for each of the Ncar reference characters.
  • the identifier of the reference character with the minimum score is assigned to the unknown input symbol.
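  • A schematic sketch of the dynamic comparison and the final decision follows. The 7 relative shifts, the frame-number filtering and the minimum-score rule come from the description above; the per-element mismatch count (standing in for the three comparison cases, which are not reproduced in this extract) and the Reference record are illustrative assumptions.

```python
# Hedged sketch of the shifted DIN/APPX/APPY comparison and the final decision.
from dataclasses import dataclass
from typing import List, Sequence

def shifted_mismatch(a: Sequence[int], b: Sequence[int], max_shift: int = 3) -> int:
    # Compare the two arrays under 7 relative shifts (-3 .. +3) and keep the best.
    best = None
    for shift in range(-max_shift, max_shift + 1):
        mismatches, compared = 0, 0
        for i, value in enumerate(a):
            j = i + shift
            if 0 <= j < len(b):            # skip elements that are not available
                compared += 1
                mismatches += int(value != b[j])
        if compared and (best is None or mismatches < best):
            best = mismatches
    return best if best is not None else len(a)

@dataclass
class Reference:
    identifier: str
    frame_number: int
    score: float                           # overall score Sj for this reference

def recognise(input_frame: int, references: List[Reference]) -> str:
    # Only references with the same frame number take part in the comparison;
    # the identifier with the minimum overall score is returned.
    candidates = [r for r in references if r.frame_number == input_frame]
    return min(candidates, key=lambda r: r.score).identifier
```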
  • the role played by the dynamic weighting factors Wo, Wr, Wp and by the stroke weight SW is very important.
  • the larger Wo is, the more important is the weight of the optical information in the recognition process.
  • Wr and Wp describe the relative importance of the different components of the dynamical score Sdj.
  • Another important parameter is the stroke weight SW: the larger it is, the larger is the influence of the stroke number. Therefore, depending on the application, it is possible to weight the relative importance of the optical and dynamical part of the algorithm by trimming the Wo, Wr, Wp and SW factors. For example, if the recognition algorithm is used for recognition of Japanese characters, which are generally written with the same stroke sequence, the dynamical contribution becomes very important and the Wo parameter should be smaller than that used for Roman characters.
  • the learning process which may be used to educate the computer in the handwriting style of the user is explained in more detail below. If the user does not wish to train the system, he may rely on recognition being effected by virtue of a reference set either permanently stored on the system or that of another user.
  • the Learning process consists in collecting a suitable number of examples for each symbol the user wants to be recognized. These symbols will form the reference set during the recognition process.
  • Two different learning (or training) modes are available. The first is known as the built-in learning mode. In this mode, the program requests the user to write examples of the symbols listed in a command file. This file contains a complete list of the symbols to be recognized with the corresponding frame constraints. The list of the symbols and the frame constraints can be changed by the user, depending upon the symbols he/she wants to be recognized and his/her handwriting style. Multiple frame constraint values are allowed for each symbol. In order to avoid the insertion of bad characters in the learning set, the program will reject characters with frame positions different from those stated in the file.
  • the second mode is the interactive learning mode.
  • the user writes symbols in the order he/she likes. Then recognition is performed. Each time the recogniser fails, unrecognized symbols are automatically inserted in the learning (or reference) set. Also, characters with disallowed frame positions will be rejected.
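  • A schematic sketch of that interactive loop is given below; the recogniser callback, the sample layout and the table of allowed frame positions are assumed interfaces, not part of the patent text.

```python
# Schematic sketch of the interactive learning mode: a sample is added to the
# reference set whenever recognition fails, and samples with disallowed frame
# positions are rejected. The interfaces used here are assumptions.
from typing import Callable, Dict, List, Set, Tuple

Sample = Tuple[str, int, object]   # (true identifier, frame number, character code)

def interactive_learning(samples: List[Sample],
                         recognise: Callable[[object, List[Sample]], str],
                         allowed_frames: Dict[str, Set[int]],
                         reference_set: List[Sample]) -> None:
    for identifier, frame, code in samples:
        if frame not in allowed_frames.get(identifier, set()):
            continue                              # reject disallowed frame positions
        if recognise(code, reference_set) != identifier:
            reference_set.append((identifier, frame, code))   # learn from the failure
```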
  • the recognition capability can be further improved by adding other characters to the learning set through the interactive learning mode.
  • This HPCR (hand printed character recognition) project is capable of being implemented in hardware.
  • the number of black pixels has been chosen to be constant and equal to 16. However, this is merely a preferred choice and is not intended to be limiting.
  • Dedicated chips may be employed in low-cost consumer products where powerful, state-of-the-art microchips are not available.
EP94830118A 1993-03-22 1994-03-18 Character recognition Expired - Lifetime EP0617381B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT93RM000179A IT1265673B1 (it) 1993-03-22 1993-03-22 Apparatus and method for the recognition of handwritten characters.
ITRM930179 1993-03-22

Publications (3)

Publication Number Publication Date
EP0617381A2 true EP0617381A2 (en) 1994-09-28
EP0617381A3 EP0617381A3 (en) 1995-02-15
EP0617381B1 EP0617381B1 (en) 2000-06-28

Family

ID=11401635

Family Applications (1)

Application Number Title Priority Date Filing Date
EP94830118A Expired - Lifetime EP0617381B1 (en) 1993-03-22 1994-03-18 Character recognition

Country Status (7)

Country Link
US (1) US5757962A (un)
EP (1) EP0617381B1 (un)
JP (1) JPH06325212A (un)
KR (1) KR100308856B1 (un)
DE (1) DE69425009T2 (un)
IT (1) IT1265673B1 (un)
TW (1) TW321747B (un)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144764A (en) * 1997-07-02 2000-11-07 Mitsui High-Tec, Inc. Method and apparatus for on-line handwritten input character recognition and recording medium for executing the method
WO1999010834A1 (en) * 1997-08-27 1999-03-04 Cybermarche, Inc. A method and apparatus for handwriting capture, storage, and indexing
US6640337B1 (en) * 1999-11-01 2003-10-28 Koninklijke Philips Electronics N.V. Digital television (DTV) including a smart electronic program guide (EPG) and operating methods therefor
US7295193B2 (en) * 1999-12-23 2007-11-13 Anoto Ab Written command
US7298903B2 (en) * 2001-06-28 2007-11-20 Microsoft Corporation Method and system for separating text and drawings in digital ink
WO2003023696A1 (en) 2001-09-12 2003-03-20 Auburn University System and method of handwritten character recognition
AUPR824301A0 (en) * 2001-10-15 2001-11-08 Silverbrook Research Pty. Ltd. Methods and systems (npw001)
US9285983B2 (en) * 2010-06-14 2016-03-15 Amx Llc Gesture recognition using neural networks
US20130011066A1 (en) * 2011-07-07 2013-01-10 Edward Balassanian System, Method, and Product for Handwriting Capture and Storage
CN104866117B (zh) * 2015-06-02 2017-07-28 北京信息科技大学 Naxi Dongba pictograph input method based on recognition using graphic topological features

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3822671A1 (de) * 1988-07-05 1990-01-11 Kromer Theodor Gmbh & Co Kg Method and device for electronically comparing line traces

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4773098A (en) * 1980-05-27 1988-09-20 Texas Instruments Incorporated Method of optical character recognition
EP0092381B1 (en) * 1982-04-15 1989-04-12 Kabushiki Kaisha Toshiba Pattern features extracting apparatus and method and pattern recognition system
JPS5975375A (ja) * 1982-10-21 1984-04-28 Sumitomo Electric Ind Ltd Character recognition device
JPS63311583A (ja) * 1987-06-15 1988-12-20 Fuji Xerox Co Ltd Handwritten character recognition system
JPH01183793A (ja) * 1988-01-18 1989-07-21 Toshiba Corp Character recognition device
US5058182A (en) * 1988-05-02 1991-10-15 The Research Foundation Of State Univ. Of New York Method and apparatus for handwritten character recognition
US5105468A (en) * 1991-04-03 1992-04-14 At&T Bell Laboratories Time delay neural network for printed and cursive handwritten character recognition

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3822671A1 (de) * 1988-07-05 1990-01-11 Kromer Theodor Gmbh & Co Kg Method and device for electronically comparing line traces

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
INTERNATIONAL JOURNAL OF MINI AND MICROCOMPUTERS, vol.15, no.1, 1993, ANAHEIM, CALIFORNIA US pages 23 - 30, XP372526 A. EL-GWAD ET AL 'Automatic recognition of handwritten arabic characters' *
PATTERN RECOGNITION, vol.8, no.2, April 1976, ELMSFORD, NY pages 87 - 98 W. STALLINGS 'Approaches to Chinese character recognition' *
PROCEEDINGS OF THE IEEE, vol.68, no.4, April 1980, NEW YORK US pages 469 - 87 C. Y. SUEN 'Automatic recognition of handprinted characters - the state of the art' *

Also Published As

Publication number Publication date
IT1265673B1 (it) 1996-11-22
KR940022338A (ko) 1994-10-20
EP0617381B1 (en) 2000-06-28
EP0617381A3 (en) 1995-02-15
DE69425009D1 (de) 2000-08-03
JPH06325212A (ja) 1994-11-25
TW321747B (un) 1997-12-01
US5757962A (en) 1998-05-26
DE69425009T2 (de) 2001-03-08
ITRM930179A0 (it) 1993-03-22
ITRM930179A1 (it) 1994-09-22
KR100308856B1 (ko) 2001-12-28

Similar Documents

Publication Publication Date Title
US6011865A (en) Hybrid on-line handwriting recognition and optical character recognition system
US5673337A (en) Character recognition
US7437001B2 (en) Method and device for recognition of a handwritten pattern
US5841902A (en) System and method for unconstrained on-line alpha-numerical handwriting recognition
US5784490A (en) Method and apparatus for automated recognition of text embedded in cluttered observations
US5742705A (en) Method and apparatus for character recognition of handwritten input
EP1630723A2 (en) Spatial recognition and grouping of text and graphics
KR100454541B1 (ko) Handwritten character recognition method and system
EP1971957B1 (en) Methods and apparatuses for extending dynamic handwriting recognition to recognize static handwritten and machine generated text
US20040096105A1 (en) Method, device and computer program for recognition of a handwritten character
US20070009155A1 (en) Intelligent importation of information from foreign application user interface using artificial intelligence
CN108664975B (zh) Uyghur handwritten letter recognition method, system and electronic device
EP0617381B1 (en) Character recognition
Shilman et al. Recognition and grouping of handwritten text in diagrams and equations
CN114241486A (zh) Method for improving the accuracy of recognizing student information on examination papers
EP0548030B1 (en) Character recognition
JP4648084B2 (ja) Symbol recognition method and device
Izadi et al. A review on Persian script and recognition techniques
JP3015137B2 (ja) Handwritten character recognition device
CN116129446A (zh) Handwritten Chinese character recognition method based on deep learning
JPS6293776A (ja) Information recognition device
JPH07271917A (ja) Method and device for creating a handwritten character recognition dictionary
JPH04353964A (ja) Document creation device
JPH10187880A (ja) Character reading device and storage medium storing character reading processing
JPH041385B2 (un)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB NL

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB NL

17P Request for examination filed

Effective date: 19950609

17Q First examination report despatched

Effective date: 19980911

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20000628

REF Corresponds to:

Ref document number: 69425009

Country of ref document: DE

Date of ref document: 20000803

ET Fr: translation filed
NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20080307

Year of fee payment: 15

Ref country code: DE

Payment date: 20080331

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20090206

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20091130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091123

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20100318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100318