Coordination

Computes coordination features as described in "Echoes of Power: Language Effects and Power Differences in Social Interaction" (Danescu-Niculescu-Mizil et al., 2012).

Example usage: exploring the balance of power in the U.S. Supreme Court.

class convokit.coordination.coordination.Coordination(coordination_attribute_name: str = 'coord', speaker_thresh: int = 0, target_thresh: int = 3, utterances_thresh: int = 0, speaker_thresh_indiv: int = 0, target_thresh_indiv: int = 0, utterances_thresh_indiv: int = 0, utterance_thresh_func: Optional[Callable[[Tuple[convokit.model.utterance.Utterance, convokit.model.utterance.Utterance]], bool]] = None)

Linguistic coordination is a measure of the propensity of a speaker to echo the language of another speaker in a conversation, as defined in “Echoes of Power: Language Effects and Power Differences in Social Interaction” (http://www.cs.cornell.edu/~cristian/Echoes_of_power.html)

This Transformer encapsulates computation of coordination-based features for a particular corpus.

Coordination is a measure of power differences between speakers in a conversation, based on the propensity of a speaker to echo the same function words used by another speaker in a conversation. It is defined in Danescu-Niculescu-Mizil et al’s “Echoes of Power: Language Effects and Power Differences in Social Interaction”.

This transformer contains various functions to measure coordination on different conversational scales. Calling transform() will annotate each speaker in the corpus with their coordination to all speakers they directly reply to. The summarize() function is a convenience method that computes aggregated coordination scores between two groups of speakers.

Note: the labeling method differs slightly from that used in the paper: we no longer match words occurring inside other words, nor words immediately following an apostrophe. Notably, we no longer separately count the “all” in “y’all.”

Parameters:
  • coordination_attribute_name – metadata attribute name under which coordination scores are stored during the transform() step.
  • speaker_thresh – threshold on the minimum number of times the speaker must use each coordination marker. Speakers that do not meet the threshold are excluded from computation for a given marker.
  • target_thresh – threshold on the minimum number of times the target must use each coordination marker. Targets that do not meet the threshold are excluded from computation for a given marker.
  • utterances_thresh – threshold on the minimum number of utterances for each speaker. Speakers that do not meet the threshold are excluded from computation for a given marker.
  • speaker_thresh_indiv – like speaker_thresh, but only considers the utterances between a speaker and a single target; thresholds whether the utterances for a single target should be considered for a particular speaker.
  • target_thresh_indiv – like target_thresh, but thresholds whether a single target’s utterances should be considered for a particular speaker.
  • utterances_thresh_indiv – like utterances_thresh, but thresholds whether a single target’s utterances should be considered for a particular speaker.
  • utterance_thresh_func – optional utterance-level threshold function that takes a (speaker Utterance, reply-to Utterance) pair and returns a bool indicating whether to include the pair in scoring.
fit(corpus: convokit.model.corpus.Corpus, y=None)

Learn coordination information for the given corpus.

fit_transform(corpus: convokit.model.corpus.Corpus, y=None) → convokit.model.corpus.Corpus

Fit and run the Transformer on a single Corpus.

Parameters:
  • corpus – the Corpus to use

Returns: same as transform()
pairwise_scores(corpus: convokit.model.corpus.Corpus, pairs: Collection[Tuple[Union[convokit.model.speaker.Speaker, str], Union[convokit.model.speaker.Speaker, str]]], speaker_thresh: int = 0, target_thresh: int = 3, utterances_thresh: int = 0, speaker_thresh_indiv: int = 0, target_thresh_indiv: int = 0, utterances_thresh_indiv: int = 0, utterance_thresh_func: Optional[Callable[[Tuple[convokit.model.utterance.Utterance, convokit.model.utterance.Utterance]], bool]] = None) → convokit.coordination.coordinationScore.CoordinationScore

Computes all pairwise coordination scores given a collection of (speaker, target) pairs.

Parameters:
  • corpus – Corpus to compute scores on
  • pairs (Collection) – collection of (speaker id, target id) pairs

Also accepted: all threshold arguments accepted by score().

Returns: A CoordinationScore object corresponding to the coordination scores for each (speaker, target) pair.
precompute(corpus: convokit.model.corpus.Corpus)

Deprecated. Use fit() instead.

score_report(corpus: convokit.model.corpus.Corpus, scores: convokit.coordination.coordinationScore.CoordinationScore)

Deprecated. Use summarize() instead.

summarize(corpus: convokit.model.corpus.Corpus, speaker_selector: Callable[[convokit.model.speaker.Speaker], bool] = <function Coordination.<lambda>>, target_selector: Callable[[convokit.model.speaker.Speaker], bool] = <function Coordination.<lambda>>, focus: str = 'speakers', summary_report: bool = False, speaker_thresh: Optional[int] = None, target_thresh: Optional[int] = None, utterances_thresh: Optional[int] = None, speaker_thresh_indiv: Optional[int] = None, target_thresh_indiv: Optional[int] = None, utterances_thresh_indiv: Optional[int] = None, utterance_thresh_func: Optional[Callable[[Tuple[convokit.model.utterance.Utterance, convokit.model.utterance.Utterance]], bool]] = None, split_by_attribs: Optional[List[str]] = None, speaker_utterance_selector: Callable[[Tuple[convokit.model.utterance.Utterance, convokit.model.utterance.Utterance]], bool] = <function Coordination.<lambda>>, target_utterance_selector: Callable[[Tuple[convokit.model.utterance.Utterance, convokit.model.utterance.Utterance]], bool] = <function Coordination.<lambda>>, speaker_attribs: Optional[Dict[KT, VT]] = None, target_attribs: Optional[Dict[KT, VT]] = None) → convokit.coordination.coordinationScore.CoordinationScore

Computes a summary of the coordination scores by giving an aggregated score between two groups of speakers.

The threshold parameters may be used to override the thresholds set in the constructor. If a threshold parameter is not explicitly set, it will take on the value provided in the constructor.

Additionally, this method provides options to tweak how scores are aggregated. The focus parameter aggregates scores relative to either speakers or targets. split_by_attribs, speaker_attribs and target_attribs specify whether to summarize scores for particular subgroups of speakers or targets.

Parameters:
  • corpus – Corpus to compute scores on
  • speaker_selector – A lambda function that takes a speaker and returns True or False depending on whether the speaker should be included in the group of speakers we want to compute scores for.
  • target_selector – A lambda function that takes a speaker and returns True or False depending on whether the speaker should be included in the group of targets.
  • focus – Either “speakers” or “targets”. If “speakers”, treat the set of targets for a particular speaker as a single person (i.e. concatenate all of their utterances); the returned dictionary will have speakers as keys. If “targets”, treat the set of speakers for a particular target as a single person; the returned dictionary will have targets as keys. See the example notebook for typical usage.
  • summary_report – if True, return a dictionary of key global coordination statistics; otherwise, return a dictionary of speaker scores.
  • speaker_thresh – threshold on the minimum number of times the speaker must use each coordination marker.
  • target_thresh – threshold on the minimum number of times the target must use each coordination marker.
  • utterances_thresh – threshold on the minimum number of utterances for each speaker.
  • speaker_thresh_indiv – like speaker_thresh, but only considers the utterances between a speaker and a single target; thresholds whether the utterances for a single target should be considered for a particular speaker.
  • target_thresh_indiv – like target_thresh, but thresholds whether a single target’s utterances should be considered for a particular speaker.
  • utterances_thresh_indiv – like utterances_thresh, but thresholds whether a single target’s utterances should be considered for a particular speaker.
  • utterance_thresh_func – optional utterance-level threshold function that takes a speaker Utterance and the Utterance the speaker replied to, and returns a bool indicating whether to include the utterance in scoring.
  • split_by_attribs – utterance meta attributes to split speakers by when tallying coordination (e.g. in Supreme Court transcripts, you may want to treat the same lawyer as a different person across different cases; see the coordination examples).
  • speaker_utterance_selector – a lambda function that takes a (speaker utterance, target utterance) pair and returns True or False for whether the speaker utterance should be considered. Useful for filtering the set of utterances before processing.
  • target_utterance_selector – a lambda function that takes a (speaker utterance, target utterance) pair and returns True or False for whether the target utterance should be considered. Useful for filtering the set of utterances before processing.

Returns: If summary_report=False, returns a CoordinationScore object corresponding to the coordination scores for each speaker. This object is a dictionary mapping each speaker to its aggregated coordination score to all speakers in the opposite group. If summary_report=True, returns a dictionary of summary statistics: the coordination scores across each marker, the overall coordination score under each of three aggregation methods (described in the paper), and the count (sample size) for the statistics under the various aggregation methods.

transform(corpus: convokit.model.corpus.Corpus) → convokit.model.corpus.Corpus

Generate coordination scores for the corpus you called fit on.

Each speaker’s coordination attribute will be a dictionary from targets to coordination scores between that speaker and target.
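Concretely, the stored attribute has roughly the shape below. This is a hypothetical sketch: the target id, marker names, and score values are invented for illustration, and averaging over markers is just one simple way to collapse the per-marker scores into a single number.

```python
# Hypothetical contents of speaker.meta["coord"] after transform():
# each target id maps to per-marker coordination scores.
coord_meta = {
    "alice": {"article": 0.12, "auxverb": -0.05, "preps": 0.08},
}

# One simple way to get a single speaker-to-target number: average the markers.
avg = sum(coord_meta["alice"].values()) / len(coord_meta["alice"])
print(round(avg, 3))  # → 0.05
```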