How to Build a Professional Networking Recommendation Engine

Let’s build an AI-powered LinkedIn recommendation engine that finds the right connections using a Bright Data LinkedIn dataset and Ollama.

By Victor Yakubu

Frequently Asked Questions

What does the professional networking recommendation application do?
The application analyzes a dataset of LinkedIn profiles and, based on a user's career goals, returns AI-driven recommendations of relevant professionals to follow.
What dataset is required to build the recommendation engine?
A LinkedIn people profiles dataset from Bright Data is required; the article uses a CSV-format dataset named linkedin_dataset.csv stored in a data/ folder.
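
As a quick sanity check before building anything, the CSV can be loaded with pandas to confirm the path and inspect the available columns (the exact fields depend on your Bright Data export):

```python
import pandas as pd

# Load the Bright Data LinkedIn people profiles export
df = pd.read_csv("data/linkedin_dataset.csv")

# The exact columns vary by export, so inspect before wiring it into the backend
print(df.shape)
print(df.columns.tolist())
print(df.head())
```
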
What prerequisites are needed to follow the tutorial and run the project?
Basic knowledge of Python and APIs, a Bright Data account to access LinkedIn datasets, and Python 3.8+ installed on the system are required.
How is the project directory organized?
The recommended structure is professional-networking-recommendation-engine/ with data/linkedin_dataset.csv, backend/main.py and backend/requirements.txt, and frontend/app.py.
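
Laid out as a tree:

```
professional-networking-recommendation-engine/
├── data/
│   └── linkedin_dataset.csv
├── backend/
│   ├── main.py
│   └── requirements.txt
└── frontend/
    └── app.py
```
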
Which Python dependencies are listed for the backend and frontend?
The dependencies listed are ollama, flask, pandas, streamlit, and requests, installed via pip from backend/requirements.txt.
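
That corresponds to a backend/requirements.txt like the one below (versions are left unpinned, since the article does not specify any):

```
ollama
flask
pandas
streamlit
requests
```
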
How does the application generate AI-driven recommendations?
The backend constructs a structured prompt describing the user's goal and queries the locally running Ollama phi3 model via ollama.chat to produce JSON-formatted recommendations.
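
As a rough sketch of that step (the prompt wording, the get_recommendations name, and passing the profiles as plain text are illustrative assumptions, not the article's verbatim code):

```python
import ollama

def get_recommendations(goal: str, profiles_text: str) -> str:
    """Ask the locally running phi3 model for JSON-formatted recommendations."""
    # Structured prompt: describe the user's goal, supply profile data,
    # and ask for strict JSON so the backend can parse the reply
    prompt = (
        f"The user's career goal is: {goal}\n\n"
        f"Candidate LinkedIn profiles:\n{profiles_text}\n\n"
        "Recommend relevant professionals to follow. Respond ONLY with JSON: "
        '[{"name": ..., "position": ..., "current_company": ..., "linkedin_url": ...}]'
    )
    response = ollama.chat(
        model="phi3",
        messages=[{"role": "user", "content": prompt}],
    )
    # ollama.chat returns the assistant reply under ["message"]["content"]
    return response["message"]["content"]
```
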
What does the Flask backend do and which endpoint does it expose?
The Flask backend loads the LinkedIn CSV into a pandas DataFrame, processes user input, queries the Ollama model, and exposes a POST /recommend endpoint that accepts JSON with a 'goal' field and returns recommendations; a sketch of this backend follows the next question.
How does the Flask /recommend endpoint handle missing input?
If the request JSON does not include a 'goal', the endpoint returns a 400 response with an error message stating 'Goal is required.'
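
Assembled into a minimal backend/main.py, the endpoint could look like the following sketch; it reuses the hypothetical get_recommendations helper above and a parse_model_response helper shown under the parsing question below, and the 50-profile slice is an arbitrary illustrative choice:

```python
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the LinkedIn dataset once at startup
df = pd.read_csv("data/linkedin_dataset.csv")

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json(silent=True) or {}
    goal = data.get("goal")
    if not goal:
        # Reject requests that omit the required 'goal' field
        return jsonify({"error": "Goal is required."}), 400

    # Serialize a slice of profiles into the prompt (50 is an arbitrary cap)
    profiles_text = df.head(50).to_string()
    raw = get_recommendations(goal, profiles_text)  # see the Ollama sketch above
    return jsonify(parse_model_response(raw))  # see the parsing sketch below

if __name__ == "__main__":
    app.run(port=5000)
```
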
What format does the Ollama model response need to be in for the backend to parse it?
The Ollama model should return JSON-formatted text containing recommendation entries with fields such as name, position, current_company, and linkedin_url; the backend strips surrounding Markdown code fences before parsing.
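
A parsing helper along those lines might strip the fences before calling json.loads; the function name and exact fence handling here are illustrative:

```python
import json

def parse_model_response(raw: str):
    """Strip surrounding Markdown code fences and parse the JSON payload."""
    text = raw.strip()
    # Models often wrap JSON in ```json ... ``` fences; remove them
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    recommendations = json.loads(text)
    # Each entry is expected to carry these fields
    return [
        {
            "name": r.get("name"),
            "position": r.get("position"),
            "current_company": r.get("current_company"),
            "linkedin_url": r.get("linkedin_url"),
        }
        for r in recommendations
    ]
```
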
How is the frontend implemented and how does it interact with the backend?
The frontend is implemented with Streamlit in frontend/app.py; it accepts a user's career goal, sends a POST request to http://127.0.0.1:5000/recommend with JSON {"goal": user_goal}, and displays the returned recommendations or error messages.
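
A minimal frontend/app.py in that spirit (widget labels and layout are illustrative, not the article's exact code):

```python
import requests
import streamlit as st

st.title("Professional Networking Recommendation Engine")

user_goal = st.text_input("What is your career goal?")

if st.button("Get Recommendations") and user_goal:
    # Call the Flask backend's /recommend endpoint
    resp = requests.post(
        "http://127.0.0.1:5000/recommend",
        json={"goal": user_goal},
    )
    result = resp.json()
    if isinstance(result, dict) and "error" in result:
        st.error(result["error"])
    else:
        # Render each recommendation returned by the backend
        for rec in result:
            st.subheader(rec.get("name", "Unknown"))
            st.write(f"{rec.get('position')} at {rec.get('current_company')}")
            st.write(rec.get("linkedin_url"))
```
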
What must be running locally for the system to function end-to-end?
Both the Ollama phi3 model and the Flask backend server must be running locally so the frontend can call the /recommend API and obtain AI-generated recommendations.
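
In practice that means something like the following sequence, each in its own terminal (exact commands may vary with your Ollama setup):

```
# Download the phi3 model (the Ollama server must be running to serve it)
ollama pull phi3

# Start the Flask backend, which serves POST /recommend on port 5000
python backend/main.py

# In a separate terminal, launch the Streamlit frontend
streamlit run frontend/app.py
```
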
How can the project be improved as suggested in the tutorial?
Suggested improvements include obtaining more updated or custom datasets from Bright Data, refining the AI model with custom fine-tuning, and deploying the application to a cloud service for broader access.
What happens if the backend fails to parse the Ollama model response?
If JSON decoding fails or expected keys are missing, the backend returns an object with an error field: {"error": "Failed to parse model response. Check the output format."}.
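
Wrapped around the parsing helper from earlier, that failure path could be sketched as follows (safe_parse is a hypothetical name, not the article's code):

```python
import json

def safe_parse(raw: str):
    """Return parsed recommendations, or an error object on failure."""
    try:
        return parse_model_response(raw)  # helper from the parsing sketch above
    except (json.JSONDecodeError, KeyError, TypeError, AttributeError):
        # Malformed JSON, or the entries lack the expected structure
        return {"error": "Failed to parse model response. Check the output format."}
```
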
