I constantly seek out valuable connections, especially on LinkedIn, because I've learned how important a strong professional network is for career growth. Finding the right professionals to follow, however, can be difficult and time-consuming.
In this article, you'll build a professional networking recommendation application that automates this process: it analyzes a dataset of LinkedIn profiles and suggests relevant professionals based on your career goals.
The tutorial covers:
- Getting LinkedIn profile data from Bright Data's dataset of over 521.72M records.
- Using Flask to serve recommendations via an API.
- Using Ollama to generate AI-driven suggestions.
- Creating a simple user interface using Streamlit.
Prerequisites
- Basic knowledge of Python and APIs.
- A Bright Data account to access LinkedIn datasets.
- Python 3.8+ installed on your system.
Getting the LinkedIn Dataset from Bright Data
To build the recommendation engine, you need a dataset of LinkedIn profiles. Bright Data provides fresh, structured datasets from popular websites, tailored to your business needs, that you can download in JSON or CSV format without having to maintain scrapers or bypass blocks.
Follow these steps to get the data:
1. Access Bright Data’s Dashboard
- Log in to your Bright Data dashboard.
- If you don’t have an account, sign up and verify your email.
2. Navigate to the LinkedIn Dataset
- On the sidebar menu, click on “Web Datasets.”
- Next, click on “Dataset Marketplace” to explore available datasets.
3. Search for “LinkedIn” on the search bar to see all the available LinkedIn datasets.
4. Click on the “LinkedIn people profiles” dataset.
5. Filter the dataset to what you need, e.g., only profiles of people in the US or professionals in a particular field, or contact Bright Data for a custom dataset of your choice.
NB: You can also download sample data, which you can play around with before buying fresh and updated datasets.
6. Purchase and Download the Dataset
- Click on “Proceed to Purchase” to acquire the dataset.
- You can choose between JSON or CSV formats (this article uses the CSV format).
- Once purchased, the dataset will be downloaded to your local machine. Rename it linkedin_dataset.csv for easy access.
7. Verify the Dataset
- Open the CSV file in a text editor or spreadsheet tool.
- Ensure it contains structured data with proper column names.
- Move the file to a data/ folder in your project (e.g., data/linkedin_dataset.csv). You'll create this folder in the next section.
Once the dataset is ready, you can load it into your Python application.
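Before going further, it's worth a quick sanity check in Python. A minimal sketch, run from the project root once the file is in place (column names vary by dataset, so verify the ones the rest of this tutorial assumes):
import pandas as pd

# Load the Bright Data LinkedIn dataset and inspect its structure
df = pd.read_csv("data/linkedin_dataset.csv")
print(df.shape)             # (rows, columns)
print(df.columns.tolist())  # verify the column names your code will rely on
print(df.head(3))           # preview a few profiles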
Building an AI-powered Professional Networking Recommendation Application with Bright Data LinkedIn Dataset
📂 Project Setup
Step 1: Create the Project Folder
Organize the project files using the following structure:
professional-networking-recommendation-engine/
│── data/
│ ├── linkedin_dataset.csv # Bright Data's LinkedIn dataset
│── backend/
│ ├── main.py # API to process user input and generate recommendations
│ ├── requirements.txt # Python dependencies
│── frontend/
│ ├── app.py # Streamlit-based UI for user input and displaying recommendations
Step 2: Set Up a Virtual Environment
Navigate to the project directory and create a virtual environment:
cd professional-networking-recommendation-engine
python -m venv venv
Activate the environment:
- Windows:
venv\Scripts\activate
- macOS/Linux:
source venv/bin/activate
Step 3: Install Dependencies
Go to the backend/ folder and create a requirements.txt file with the following dependencies:
ollama
flask
pandas
streamlit
requests
Save the file, then install the dependencies:
pip install -r backend/requirements.txt
Step 4: Add the Bright Data LinkedIn Dataset
In the project root, create the data/ folder shown in the structure above, and place your downloaded LinkedIn dataset from Bright Data inside it.
⚙️Running the Ollama Phi3 Model Locally
To generate AI-driven recommendations, you’ll use the Ollama Phi3 model. Ollama provides an easy way to run AI models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 2 locally for free. This section will guide you through installing and running the Phi3 model on your machine.
Step 1: Install Ollama
Ollama provides a simple CLI tool to run large language models (LLMs) locally. Install it by following the instructions for your operating system:
Windows (PowerShell)
iwr -useb https://ollama.ai/install.ps1 | iex
Linux (Curl)
curl -fsSL https://ollama.ai/install.sh | sh
macOS (Homebrew)
brew install ollama
Once installed, verify the installation by running:
ollama --version
Step 2: Download the Phi3 Model
Now, pull the Phi3 model to your local system:
ollama pull phi3
This will download the model and make it available for local execution.
Step 3: Run the Ollama Model
You can now test the model by running:
ollama run phi3
This starts an interactive chat session where you can input text prompts and receive AI-generated responses.
NB: Always ensure that the Ollama model is running locally when you run your code. If not, you won’t be able to query the AI model.
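Because the backend will call Ollama through its Python package (installed with the other dependencies in Step 3 of the project setup), you can also sanity-check the connection from Python. A minimal sketch:
import ollama

# Ask phi3 for a one-word reply; this fails with a connection error
# if the Ollama server isn't running locally.
response = ollama.chat(
    model="phi3",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}]
)
print(response["message"]["content"])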
Next, implement the code for the backend service.
🛠️ Setting Up the Backend with Flask
The backend serves as the core of the recommendation engine. It processes user input, retrieves relevant LinkedIn profiles, and returns AI-generated recommendations. This section walks through setting up a Flask API to handle these operations.
Step 1: Create the Flask API
Navigate to the backend/ folder and create a new file called main.py. This script will:
- Load the LinkedIn dataset
- Process user input
- Use Ollama to generate AI-driven recommendations
- Return results via an API
1. Import Dependencies
Open main.py and add the following imports:
import pandas as pd
import json
import ollama
from flask import Flask, request, jsonify
2. Load the LinkedIn Dataset
Ensure your dataset is placed inside the data/ directory. Then, load it into a Pandas DataFrame:
df = pd.read_csv("../data/linkedin_dataset.csv")
3. Define the Recommendation Function
This step defines two functions: one builds a structured prompt that embeds a sample of profiles from the dataset, and the other queries the Ollama AI model with it:
def generate_prompt(user_goal):
    """Generate a structured prompt for the phi3 model."""
    # Embed a sample of profiles so the model recommends people who actually
    # exist in the dataset. The column names below follow Bright Data's
    # LinkedIn people profiles schema; verify them against your CSV and
    # adjust if they differ.
    profiles = df[["name", "position", "current_company", "url"]].head(50)
    prompt = f"""
    You are an expert career coach. Given the user's goal: "{user_goal}",
    suggest 10 professionals from the dataset below who would be great to
    follow on LinkedIn.
    Base your recommendations on their position, experience, and influence.
    Return the results as a JSON list of objects with the keys: name,
    position, current_company, and linkedin_url.

    Dataset:
    {profiles.to_json(orient='records')}
    """
    return prompt
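Phi3's context window is limited, so embedding only the first 50 rows leaves most of the dataset unused. One improvement is to pre-filter profiles by keywords from the goal and pass those to the prompt instead of df.head(50). A rough sketch, assuming a text position column (filter_profiles is a hypothetical helper, not part of the original code):
def filter_profiles(df, user_goal, limit=50):
    """Keep profiles whose position mentions a keyword from the goal."""
    keywords = [w.lower() for w in user_goal.split() if len(w) > 3]
    mask = df["position"].fillna("").str.lower().apply(
        lambda position: any(k in position for k in keywords)
    )
    matches = df[mask]
    # Fall back to the first rows if no position matches the keywords
    return (matches if not matches.empty else df).head(limit)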
Query the AI model:
def get_recommendations(user_goal):
    """Query phi3 with a structured prompt."""
    prompt = generate_prompt(user_goal)
    response = ollama.chat(
        model="phi3",
        messages=[{"role": "user", "content": prompt}]
    )
    # Print raw response for debugging
    print("Raw response from Ollama:", response)
    try:
        content = response["message"]["content"].strip()  # Extract text response
        # Remove Markdown code block markers if present
        if content.startswith("```json"):
            content = content[7:]  # Remove the starting "```json"
        if content.endswith("```"):
            content = content[:-3]  # Remove the ending "```"
        recommendations = json.loads(content.strip())  # Convert text to JSON
        return recommendations
    except (json.JSONDecodeError, KeyError):
        return {"error": "Failed to parse model response. Check the output format."}
4. Set Up the Flask API
Now, define the Flask app and the /recommend endpoint:
# Flask API
app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.json
    user_goal = data.get("goal")
    if not user_goal:
        return jsonify({"error": "Goal is required."}), 400
    recommendations = get_recommendations(user_goal)
    return jsonify(recommendations)

if __name__ == "__main__":
    app.run(debug=True)
Step 2: Run the API
Start the server by running:
cd backend
python main.py
Step 3: Test the API
Use Postman or cURL to send a test request:
curl -X POST "http://127.0.0.1:5000/recommend" -H "Content-Type: application/json" -d '{"goal": "I want to become a machine learning engineer"}'
The API should return a JSON response with recommended professionals.
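For illustration, a successful response might look like this (hypothetical values; actual results depend on the model and your dataset):
[
  {
    "name": "Jane Doe",
    "position": "Senior Machine Learning Engineer",
    "current_company": "Acme Corp",
    "linkedin_url": "https://www.linkedin.com/in/janedoe"
  }
]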
With the API working, you can move on to the frontend.
💪 Building the Frontend with Streamlit
The frontend provides an interactive user interface where users can enter their career goals and receive recommendations. Streamlit makes UI development easy with minimal code.
Step 1: Create the Streamlit App
Navigate to the frontend/ folder and create a file named app.py. This script will:
- Accept user input for career goals
- Send a request to the backend API
- Display recommended professionals
Open app.py and add the following code:
import streamlit as st
import requests

def get_recommendations(user_goal):
    """Send a request to the backend API and fetch recommendations."""
    url = "http://127.0.0.1:5000/recommend"
    response = requests.post(url, json={"goal": user_goal})
    if response.status_code == 200:
        return response.json()
    else:
        return {"error": "Failed to fetch recommendations."}

def main():
    st.title("🔍 LinkedIn Professional Recommendation Engine")
    st.write("Enter your career goal below, and we'll suggest professionals you should follow on LinkedIn.")
    user_goal = st.text_input("🎯 Your Career Goal:")
    if st.button("Get Recommendations 🚀"):
        if user_goal:
            response = get_recommendations(user_goal)
            if "error" in response:
                st.error(response["error"])
            else:
                st.subheader("✅ Recommended Professionals:")
                for person in response:
                    with st.container():
                        st.markdown(f"### {person['name']}")
                        st.markdown(f"**Position:** {person['position']}")
                        st.markdown(f"**Company:** {person['current_company']}")
                        st.markdown(f"🔗 [LinkedIn Profile]({person['linkedin_url']})", unsafe_allow_html=True)
                        st.markdown("---")  # Adds a separator line between profiles
        else:
            st.warning("⚠️ Please enter a career goal.")

if __name__ == "__main__":
    main()
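One caveat: the display loop assumes the backend returns a plain JSON list. Local models sometimes wrap the list in an object instead, so a small guard just before the for loop can keep the UI working (the "recommendations" key here is an assumption about one common wrapping):
# Some model outputs wrap the list in a key such as "recommendations";
# unwrap it so the display loop still receives a list of profiles.
if isinstance(response, dict) and "recommendations" in response:
    response = response["recommendations"]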
Step 2: Run the Streamlit App
NB: Ensure that both the Ollama model and the backend service are running.
To start the UI, run:
cd frontend
streamlit run app.py
This launches a local Streamlit server. Open the displayed URL in a browser to interact with the recommendation engine. Now, you should get a list of recommended professionals to connect with based on your career goals.
Wrapping Up
You’ve built an AI-powered professional networking recommendation engine that helps users find relevant connections based on their career goals. The dataset for this project came from Bright Data, which provides structured, up-to-date data from various websites that you can use to make informed business decisions, improve customer relations, make strategic career moves, and more. The project also integrated a Flask backend for processing requests, Ollama’s Phi3 model for AI-driven recommendations, and Streamlit for the frontend user interface.
Because the system runs entirely locally, you can keep improving it at no cost. You can enhance it by:
- Getting more updated or custom datasets from Bright Data.
- Refining the AI model with custom fine-tuning.
- Deploying the application to a cloud service for broader access.