Abstract
The flourishing use of multiple social networks has been verified by the GWI survey: Internet users hold an average of 5.54 social media accounts, of which 2.82 are used actively. In general, the views expressed by the same users on different social sites are consistent or complementary rather than independent. Hence, piecing together the social signals of the same users across social networks, if fused properly, can benefit downstream analytics. Despite its value and significance, far too little attention has been paid to jointly exploring and modeling three facts: 1) source consistency, 2) source complementarity, and 3) source confidence. Towards this end, we propose a novel model that co-regularizes these three facts in a unified framework to strengthen the learning performance over multiple social networks. In addition, we theoretically derive its solution and verify our model on the task of user interest inference from multiple social networks, namely Facebook, Quora, Twitter, and LinkedIn. Extensive experiments have justified its superiority over several state-of-the-art competitors.
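To make these three facts concrete, a schematic co-regularized objective over S sources might take the form below. This is an illustrative sketch, not the paper's exact formulation: f_s denotes a source-specific predictor on features X_s, alpha_s a learned source-confidence weight, the per-source losses preserve complementary information, and the pairwise term encourages consistent predictions across sources.

```latex
\min_{\{f_s\},\, \boldsymbol{\alpha}} \;
  \sum_{s=1}^{S} \alpha_s \, \mathcal{L}\!\left(f_s(\mathbf{X}_s), \mathbf{Y}\right)
  \;+\; \lambda \sum_{s < t} \left\| f_s(\mathbf{X}_s) - f_t(\mathbf{X}_t) \right\|_F^2
  \qquad \text{s.t.} \;\; \alpha_s \ge 0, \;\; \sum_{s=1}^{S} \alpha_s = 1
```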
Dataset
The dataset used in our experiments comprises the profiles of 1,607 users on Twitter, Facebook, Quora, and LinkedIn. The dataset is available here.
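Since the released file layout is not described here, the snippet below is only a loading sketch under an assumed layout (one JSON file per network, keyed by an anonymized user ID); the file names and structure are illustrative.

```python
import json

# Hypothetical loading sketch: assumes one JSON file per network, each mapping
# an anonymized user ID to that user's profile fields. File names are illustrative.
NETWORKS = ["twitter", "facebook", "quora", "linkedin"]

profiles = {}  # user_id -> {network: profile_dict}
for net in NETWORKS:
    with open(f"{net}.json", encoding="utf-8") as fh:
        for user_id, profile in json.load(fh).items():
            profiles.setdefault(user_id, {})[net] = profile

print(f"Loaded profiles for {len(profiles)} users")  # expect 1,607 users in total
```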
Code
Apart from the component-wise evaluation, we carried out experiments to compare the overall effectiveness of our proposed MUSCLE model against several state-of-the-art source fusion approaches:
Method | Description | Code |
EarlyFusion | This method concatenates the features extracted from different sources into a single feature vector and then feeds the early-fused data into a support vector machine (SVM) model. | Code |
LateFusion | We first train three SVM models separately on the Twitter, Facebook, and Quora data, and then linearly integrate their predicted results with optimal fusing parameters. | Code |
MSNL | This multiple social network learning (MSNL) model was initially proposed to infer the volunteerism tendency of a given user by leveraging source consensus. | Code |
MvDA | This multi-view discriminant analysis (MvDA) method learns a single unified discriminant common space for multiple views by jointly optimizing multiple view-specific transforms, one for each view. | Code |
MvDA-VC | This is an extended version of MvDA that incorporates view-consistency constraints. | Code |
MUSCLE | We propose a novel MUltiple SoCiaL nEtwork learning model, MUSCLE for short, which jointly co-regularizes source confidence, consistency, and complementarity. | Code |
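For concreteness, the sketch below illustrates the EarlyFusion and LateFusion baselines with scikit-learn; the synthetic feature matrices, the probability-based combination, and the fixed fusion weights are illustrative assumptions rather than the exact experimental setup.

```python
# Illustrative sketch of the EarlyFusion and LateFusion baselines (not the exact setup).
# X_twitter, X_facebook, X_quora stand in for per-source feature matrices; y holds labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_twitter, X_facebook, X_quora = (rng.normal(size=(n, d)) for d in (50, 40, 30))
y = rng.integers(0, 2, size=n)

# EarlyFusion: concatenate the source features, then train a single SVM.
X_early = np.hstack([X_twitter, X_facebook, X_quora])
early_model = SVC(probability=True).fit(X_early, y)

# LateFusion: train one SVM per source, then linearly combine predicted probabilities.
sources = [X_twitter, X_facebook, X_quora]
models = [SVC(probability=True).fit(X, y) for X in sources]
weights = [0.4, 0.3, 0.3]  # illustrative; in practice tuned on validation data
late_scores = sum(w * m.predict_proba(X)[:, 1]
                  for w, m, X in zip(weights, models, sources))
late_pred = (late_scores >= 0.5).astype(int)
```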