Full text indexes and normal T-SQL

  • I have a couple of options but am too uninformed to know which road to take. My options are: completely rewrite the application to use full-text indexing, or add full-text indexes to the fields and hope that the existing queries use them. So will normal T-SQL predicates like LIKE with wildcards and CHARINDEX(' internal', productname) > 0 work with, or make use of, the full-text indexes?

    Many thanks

  • I don't believe LIKE or CHARINDEX will use the full-text index; the predicates that use a full-text index are CONTAINS, CONTAINSTABLE, FREETEXT, and FREETEXTTABLE.
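
    For example, assuming a hypothetical Products table with a full-text index on productname (table and column names here are illustrative, not from the original post), the substring search would be rewritten with CONTAINS:

    -- Assumes a hypothetical Products table with a full-text index on productname.
    -- This LIKE predicate cannot use the full-text index and forces a scan:
    SELECT productid, productname
    FROM Products
    WHERE productname LIKE '%internal%';

    -- This word search does use the full-text index:
    SELECT productid, productname
    FROM Products
    WHERE CONTAINS(productname, 'internal');

    Note that CONTAINS matches whole words as tokenized by the full-text word breaker, so its results may differ from a raw substring LIKE.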

    Thanks.

    Mohit.

    [font="Arial"]---

    Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
    Microsoft FTE - SQL Server PFE

    * Sometimes it's the search that counts, not the finding...
    * I didn't think so, but if I was wrong, I was wrong. I'd rather do something and make a mistake than be frightened and be doing nothing. :smooooth:[/font]

    How to ask for help .. Read Best Practices here.

  • 1. Read the BOL, as it describes the basics of full-text search, its architecture, etc.

    2. See Hilary Cotter's nice articles at http://www.simple-talk.com/sql/learn-sql-server/sql-server-full-text-search-language-features/

    3. Use MSDN for additional research. See: http://msdn.microsoft.com/en-us/library/ms345119.aspx

    4. Search the forums as this has been discussed at length.


    [font="Arial Narrow"](PHB) I think we should build an SQL database. (Dilbert) What color do you want that database? (PHB) I think mauve has the most RAM.[/font]

  • Thanks for the replies. My largest column contains close to 900 bytes. I can change the ntext to varchar, but then I get an error when placing a non-clustered index on the field: it says the field is too large to index.

    Is there a way, for example using some sort of hashing algorithm, to get around this limitation? I really do not want to place full-text indexes on these fields.

    Thanks!

  • brd123 (3/26/2009)

    See the BOL (which should be your friend) for limitations on what can be indexed and index size.

    Indexing the column will not improve your wildcard (LIKE, etc.) queries on the column as the index will not be used.
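
    A quick way to see this is to compare the predicates, again using a hypothetical Products table with an ordinary non-clustered index on productname (names are illustrative only):

    -- Assumes a hypothetical Products table with a non-clustered index on productname.
    -- A leading wildcard or CHARINDEX prevents an index seek; the optimizer scans:
    SELECT productid FROM Products WHERE productname LIKE '%internal%';
    SELECT productid FROM Products WHERE CHARINDEX('internal', productname) > 0;

    -- Only a prefix search (no leading wildcard) can seek on that index:
    SELECT productid FROM Products WHERE productname LIKE 'internal%';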

    Your best bet is to use a full-text index and the CONTAINS function with the appropriate changes in your application.

    You won't be able to use a view, but you can run the equivalent query directly, i.e. the query that was used to build the view. For example:

    SELECT
        a.column1,
        a.column2,
        b.column1,
        b.column2
        ...
    FROM TableA a
    INNER JOIN TableB b
        ON a.column = b.column
    WHERE CONTAINS(a.column1, 'full-text search terms')
        AND ...


