I'm using data represented as a sparse matrix, where the columns contain numerical values for "article_Id", "word_id", and "count", as follows:

Python
```
   article_Id  word_id  count
0           1        3      1
1           1       10      1
2           1       12      8
3           1       17      1
4           1       23      8
```

I'm representing each document by a TF-IDF vector of its top 100 coordinates. Next, I randomly select a subset of 200 documents and compute the pairwise similarity of those 200 documents.

Next, the task is to store the tf-idf values in a 1000 x 100 matrix, where 1000 is the number of documents and 100 is the number of top occurring words. These top words differ from document to document, so we have to select the top 100 words from the whole vocabulary (all documents combined) and compute the tf-idf of only those words for each document.

Since tf-idf is calculated separately for each document, I'm unable to restrict it to the top 100 words of the whole vocabulary. Any ideas?
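One way to get a single vocabulary-wide top 100 is to make a first pass over all the triples to accumulate global counts, pick the 100 most frequent words once, and then score only those fixed columns for every document. A rough sketch on randomly generated stand-in triples (the names `rows`, `top_words`, `col`, and the data itself are all hypothetical, and the log-form idf is used here where the question's code uses a plain ratio):

```python
import math
import random
from collections import Counter

random.seed(0)

# Hypothetical stand-in for the sparse (article_id, word_id, count) triples
rows = [(a, w, random.randrange(1, 9))
        for a in range(1, 1001)
        for w in random.sample(range(1, 500), 30)]

total_counts = Counter()   # word -> total occurrences over all documents
doc_freq = Counter()       # word -> number of documents containing it
doc_totals = Counter()     # article -> total word count of that document
for article_id, word_id, count in rows:
    total_counts[word_id] += count
    doc_freq[word_id] += 1
    doc_totals[article_id] += count

# One fixed vocabulary-wide top 100, shared by every document
top_words = [w for w, _ in total_counts.most_common(100)]
col = {w: j for j, w in enumerate(top_words)}

n_docs = 1000
tfidf = [[0.0] * 100 for _ in range(n_docs)]
for article_id, word_id, count in rows:
    if word_id in col:
        tf = count / doc_totals[article_id]
        idf = math.log(n_docs / doc_freq[word_id])  # log form; a plain ratio works too
        tfidf[article_id - 1][col[word_id]] = tf * idf
```

Because the column mapping `col` is the same for every row, the resulting 1000 x 100 matrix keeps the word identity of each column, which is what per-document sorting loses.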

What I have tried:

Here's what my code looks like right now.

Python
```
word_ids = []
article_ids = []
data = {}

document_word_counters = [0] * 1000
word_articles = {}

with open("C:/Users/Mehreen/Desktop/Dataset/20 newsgroups data/data50.csv", "r") as f:
    for line in f:
        line_arr = line.split(',')
        word_ids.append(int(line_arr[1]))

        # document frequency: how many articles contain this word
        if line_arr[1] not in word_articles:
            word_articles[line_arr[1]] = 1
        else:
            word_articles[line_arr[1]] += 1

        if line_arr[0] not in data:
            article_ids.append(int(line_arr[0]))
            data[line_arr[0]] = {}

        # total word count of the article (the original also assigned the first
        # count on creation, which double-counted the article's first word)
        article = data[line_arr[0]]
        document_word_counters[int(line_arr[0]) - 1] += int(line_arr[2])
        article[line_arr[1]] = int(line_arr[2])
```
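The same counting can be written more compactly with the `csv` module and `collections`; a minimal sketch on an in-memory stand-in for the CSV (the `sample` string is hypothetical data in the same three-column layout):

```python
import csv
import io
from collections import Counter, defaultdict

# In-memory stand-in for data50.csv: article_id, word_id, count per row
sample = "1,3,1\n1,10,1\n1,12,8\n2,3,2\n2,17,1\n"

data = defaultdict(dict)            # article_id -> {word_id: count}
word_articles = Counter()           # word_id -> number of articles containing it
document_word_counters = Counter()  # article_id -> total word count

for article_id, word_id, count in csv.reader(io.StringIO(sample)):
    count = int(count)
    data[article_id][word_id] = count
    word_articles[word_id] += 1
    document_word_counters[article_id] += count
```

`Counter` removes the need for the explicit "not in dict" branches, and `defaultdict(dict)` creates each article's inner dict on first access.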

Here's what I'm doing to compute tf-idf.
Python
```
import numpy as np

word_ids = np.unique(word_ids).tolist()
document_vectors = []

for article in article_ids:
    if str(article) in data:
        article_data = data[str(article)]
        document_vector = [0] * len(word_ids)
        for key, value in article_data.items():
            index = word_ids.index(int(key))
            tf = value / document_word_counters[article - 1]
            idf = 1000 / word_articles[key]  # documents / document frequency
            document_vector[index] = tf * idf
        # note: sorting orders the values but discards which word each belongs to
        document_vectors.append(np.sort(document_vector)[::-1])

tf_idf_matrices = []

for k in range(100):
    document_count = 0
    tf_idf_matrix = [[0] * len(word_ids)] * 1000
    for document_vector in document_vectors:
        tf_idf_matrix[document_count] = document_vector[:k]
        document_count += 1
    tf_idf_matrices.append(tf_idf_matrix)
```
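One pitfall worth flagging in the last block: `[[0] * len(word_ids)] * 1000` repeats the *same* inner list 1000 times, so mutating any row mutates all of them. It happens not to bite here because rows are replaced rather than mutated, but a list comprehension is the safe form. A minimal demonstration:

```python
n = 3
aliased = [[0] * n] * 4          # four references to one shared list
aliased[0][0] = 9                # "changing row 0" changes every row

independent = [[0] * n for _ in range(4)]  # four separate lists
independent[0][0] = 9                      # only row 0 changes
```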

and later a random sample can easily be chosen with the random.sample() function.
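For that final step, a sketch of sampling 200 rows and computing their pairwise cosine similarities with NumPy (the matrix here is a random stand-in for the real 1000 x 100 tf-idf matrix):

```python
import random
import numpy as np

tfidf = np.random.rand(1000, 100)           # stand-in for the real tf-idf matrix
sample_idx = random.sample(range(1000), 200)
subset = tfidf[sample_idx]                  # 200 x 100

# cosine similarity: normalize each row to unit length, then dot-product
norms = np.linalg.norm(subset, axis=1, keepdims=True)
unit = subset / norms
similarity = unit @ unit.T                  # 200 x 200, symmetric, diagonal = 1
```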
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
