...
- Add an output connector:
Click on List Output Connectors.
Then click on the Add a new output connector button.
The configuration is exactly the same as for the standard output connector that already exists, named DatafariSolr, so you can copy/paste the entire configuration except for the Paths tab and, obviously, the name of the output connector: in this example, Solr. Change the update handler field to /update/website (instead of /update/extract).
We will see at the end of this tutorial what the configuration of that handler consists of on the Solr side.
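In the meantime, if you want to check whether an /update/website request handler is already defined in your Solr instance, a minimal sketch using the Solr Config API could look like the following. The Solr URL and the collection name are assumptions to adapt to your own installation; this is not part of the MCF configuration itself.

```python
# Sketch: check whether the /update/website handler is defined on the Solr side.
# Assumptions (adjust to your installation): Solr listens on localhost:8983
# and the target collection is named "FileShare" (hypothetical name).
import requests

SOLR_BASE = "http://localhost:8983/solr"
COLLECTION = "FileShare"  # hypothetical, check your own setup

def handler_is_defined(handler_path: str) -> bool:
    """Query the Solr Config API and check for a given request handler."""
    url = f"{SOLR_BASE}/{COLLECTION}/config/requestHandler"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    handlers = response.json().get("config", {}).get("requestHandler", {})
    return handler_path in handlers

if __name__ == "__main__":
    print("/update/website defined:", handler_is_defined("/update/website"))
```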
- Add a repository connector
Now that the Solr handler output connector is configured, we have to add the Web repository connector and the associated job. Click on List Repository Connectors, then click on Add new connection.
Name: whatever you want; in this example, Web.
Connection Type: Web
Authority group: None
For the other values, I invite you to read the MCF documentation. You can keep the default configuration; just add an email address to identify the crawler to the websites that you will crawl.
Info: Your web crawling can strongly depend on the web sources you intend to crawl. On the connector side, you can start your tests with the following values, but beware that you may be temporarily banned by the sources if they deem you too aggressive. Go to the Bandwidth tab of your repository connector:
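The original values shown for the Bandwidth tab are not reproduced here. Purely as an illustration of what a polite, clearly identified crawler looks like (this is not MCF code), a minimal rate-limited fetch loop might resemble the sketch below; the delay value and the contact address are placeholders, not recommended settings.

```python
# Illustrative sketch only: a polite fetch loop with a fixed delay and a
# User-Agent that identifies the crawler, similar in spirit to what the
# email address and Bandwidth settings achieve in the MCF Web connector.
# The delay and contact address below are placeholders, not recommended values.
import time
import requests

CONTACT = "crawler-admin@example.com"   # hypothetical contact address
DELAY_SECONDS = 2.0                     # hypothetical minimum delay between fetches

session = requests.Session()
session.headers["User-Agent"] = f"MyTestCrawler/1.0 (+mailto:{CONTACT})"

def fetch_politely(urls):
    """Fetch each URL in turn, waiting DELAY_SECONDS between requests."""
    for url in urls:
        response = session.get(url, timeout=10)
        print(url, response.status_code)
        time.sleep(DELAY_SECONDS)

fetch_politely(["http://www.datafari.com"])
```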
- Add a job
We can now add the job linked to the repository connector and the output connector that we just added.
Let's see the configuration:
Name: choose whatever you want. In this example: DatafariWebSite
Connection: choose the repository connection and the output connection that we just created
Seeds: the website that you want to index. Here we entered http://www.datafari.com
Inclusions: we only want to parse HTML pages, so we enter ".*" in the Include in crawl text field and ".html$" in the Include in index text field (see the small illustration after this list).
For the other tabs, you can keep the default parameters.
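To make the effect of the two inclusion patterns concrete, here is a purely illustrative sketch of how they filter URLs. MCF evaluates them as Java regular expressions, but the matching behaviour is the same for these simple patterns; the example URLs below are made up.

```python
# Sketch: how the two inclusion patterns behave, illustrated with Python's re module.
import re

include_in_crawl = ".*"      # every discovered URL is fetched and its links are followed
include_in_index = ".html$"  # only URLs ending in "html" are sent to the output connector

urls = [
    "http://www.datafari.com/index.html",
    "http://www.datafari.com/assets/logo.png",
    "http://www.datafari.com/en/",
]

for url in urls:
    crawled = re.search(include_in_crawl, url) is not None
    indexed = re.search(include_in_index, url) is not None
    print(f"{url}  crawled={crawled}  indexed={indexed}")
```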
...