facebookresearch / TaBERT

This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).

Summary
Main Code: 5,706 LOC (31 files) = PY (98%) + JAVA (<1%) + YML (<1%)
Secondary code: Test: 0 LOC (0 files); Generated: 0 LOC (0 files); Build & Deploy: 75 LOC (5 files); Other: 771 LOC (7 files)
Duplication: 5%
File Size: 29% long (>1000 LOC), 29% short (<= 200 LOC)
Unit Size: 13% long (>100 LOC), 36% short (<= 10 LOC)
Conditional Complexity: 17% complex (McCabe index > 50), 38% simple (McCabe index <= 5)
Logical Component Decomposition: primary (7 components)
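The conditional-complexity figures above are based on the McCabe index, which counts the linearly independent paths through a unit: one plus the number of decision points. As a minimal illustrative sketch (this is an assumed approximation using Python's `ast` module, not Sokrates' actual implementation), the index for a snippet of Python code can be estimated like this:

```python
import ast

def mccabe(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each branch or loop construct adds one decision point.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        # Each extra operand of `and`/`or` adds a short-circuit branch.
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return 1 + decisions

simple = "def f(x):\n    return x + 1\n"
branchy = "def g(x):\n    if x > 0:\n        return 1\n    elif x < 0:\n        return -1\n    return 0\n"
print(mccabe(simple))   # 1: straight-line code, no decision points
print(mccabe(branchy))  # 3: the if and elif each add one decision point
```

Under this counting, a unit flagged as "complex" here (McCabe index > 50) would contain over fifty such branch points.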

1 year, 6 months old

  • 100% of code older than 365 days
  • 100% of code not updated in the past 365 days

0% of code updated more than 50 times

Also see temporal dependencies for files frequently changed in the same commits.

Goals: Keep the system simple and easy to change (4)
Features of interest: TODOs (4 files)
Commits Trend

Latest commit date: 2021-08-11

0 commits, 0 contributors (past 30 days)

Commits and contributors per year:

Year  Commits  Contributors
2021     1          1
2020     8          3
Reports

  • Analysis Report
  • Trend Analysis Report
Notes & Findings
Links

generated by sokrates.dev (configuration) on 2022-01-25